doi | transcript | abstract
---|---|---|
10.5446/32092 (DOI)
|
I'm going to get started; it's one o'clock. This is a presentation about an urban planning platform. How many of you are in the urban planning business? How many people are familiar with UrbanFootprint? UrbanFootprint is an open-source scenario planning platform. It's a suite of tools that lets cities and regions compare alternative land use futures. The project allows cities and regions to create, or take their existing, future land use plans, show them on a map, look at the underlying data, and run analysis to see how those plans perform. The base year, the study year, might be 2012, for instance, when they gathered their data, and the future scenario year might be 2050 or 2030. I'll get more into this, but the goal is to see how those different future scenarios compare. My name is Andy Likuski; I'm not Garland Woodsong, who's listed in the program. I'm the primary software developer for UrbanFootprint. Calthorpe Associates is an urban design and planning firm located in Berkeley, California. We're a small team with a big product, so hopefully there's something here of interest to you. This kind of program is often called a sketch tool because it literally lets you sketch futures on the map, which we'll get into on the interface: you have a base parcel layer on the map that shows you what's on the ground, and you can sketch a future scenario and run analytics. The project is also exciting because it is fully open source; the stack is an open-source stack, and I'll get into details on that. Since I'm the software guy and less the presenter-planner guy, feel free to ask me questions on anything, but I'll be able to give you a lot more information on the software, and I might have to defer a little bit on some of the planning elements. Calthorpe Associates has been around for a long time and has decades of experience with scenario planning; you can see some of the regions we've worked in. Our focus is on transit-oriented development and compact growth, trying to meet some of the global and regional climate, environmental, and health goals that a lot of you are surely interested in. We've been leveraging geospatial software technology for a long time, with the goal of improving the efficiency and visibility of the planning process. We specialize in regional planning, and we also work with individual cities and towns. What got UrbanFootprint started was a project called Vision California, which took place from roughly 2008 to 2012. It was funded by the California High Speed Rail Authority along with the California Strategic Growth Council, basically to show how different future scenarios could have environmental, fiscal, and health impacts, and really to tie in the interplay between transportation and land use investments. So rather than looking at them in a silo, it shows how where you build can affect your results, either positively or negatively.
We modeled data in the five major population regions of California, and when you're doing this kind of geographic analysis, with this many features, you tend to hit some limits of traditional modeling software. We actually broke the proprietary software we were working with: we were having twelve-hour analysis runs that would go overnight and then end with a very mysterious error message. So the team was essentially forced onto an open-source stack, and we're very happy that we moved in that direction. We moved to PostGIS and Django, and we were using OpenLayers; that was the original Vision California product. Here's just one of the many outputs of the Vision California project. We analyzed a number of different future scenarios, but this shows the two extremes: business as usual, the sprawling, auto-oriented, business-park-in-the-suburbs kind of growth, versus growing smart, which is really a package of more aggressive green scenarios designed to concentrate growth around existing and future transit networks. On the left you can see that a lot of that pink area is near population centers but really sprawled out, and on the right you can see much more compact growth around the planned high-speed rail corridor from San Francisco and Sacramento to LA and San Diego. Very happily, that high-speed rail project is now finally being constructed. I'll also show some slides at the end of the presentation that demonstrate the impact of these different scenarios on important analyses like greenhouse gas emissions, public health, fiscal impacts, water, and energy. Please let me know if you have questions; I know some of this, especially for non-planners, can go right over your head, so stop me if any of it doesn't make sense. Here's a look at the two software products that came out of the Vision California process. The one on the bottom is called RapidFire, and it is a spreadsheet-based model for non-geospatial comparison of scenarios. It was developed for regions that either don't have access to geospatial data or don't need to do geospatial comparison; they might just have a few future scenarios they're looking at, compact, sprawl, or something in the middle, plus some policy packages, and they want to compare the results. So RapidFire was designed to work without geospatial analysis. UrbanFootprint, the one on the top, is what this talk is going to focus on, and that is the web-enabled, open-source, and obviously geospatially aware platform. UrbanFootprint, like a lot of GIS applications, starts with your data: you have to get your data together and organize it. The other two parts of UrbanFootprint are taking that base data, developing scenarios, and running analysis. In terms of data development and organization, our clients, which are typically regional entities or cities, usually provide us with feature data. It's often parcels, but it could be something bigger like transportation analysis zones. At a minimum, that data needs to contain information about employment categories on the parcels or other features, as well as dwelling unit categories, and probably some kind of land use code so that we can categorize it and show it on the map.
They might also need to supply us with additional data, such as information about energy usage, so that we can run certain types of analysis. Other data we can sometimes infer from census data, so what we get depends on what the client has available and what we can get from elsewhere. Once that base data is loaded, it gets normalized into our system, and at that point the client is able to visualize one or more layers on the map. Usually those are the base parcels, but perhaps also transportation networks that they've given us. That's the starting point where you can review the features on the ground representing a year such as 2014. The next step is to create or import the future scenarios, which are the two or more alternatives you want to analyze. Oftentimes a client will already have future scenarios on the books; a lot of regional entities, especially in California, are required by law to produce future plans every few years for a certain target year. So they might have a few different alternative plans for, say, 2040 already on the books, and then it's our job to take those plans and translate them into UrbanFootprint. That gives us scenario development. It might also be the case that you have no future plans, or you want to modify one, in which case you might start with the base scenario, create a future scenario, and literally paint futures on the map: you select certain parcels and say, well, this used to be this; I want it in 2040 to be that. One might do a query and say, I want all the low-density single-family homes within a certain distance of a transit stop, and we want to upgrade them to mixed-use development for the target year. That way you get a certain increase in employment: certain employment categories and dwelling unit categories increase, whereas others might decrease. Either way, you end up with future scenarios, whether imported, made from scratch, or a combination of the two. While you're making these future scenarios, you have the opportunity to run analysis on them, and some of the analysis is very simple: it just asks, what's the delta from the base year to the future year? How much more employment of certain types do we have? How many more dwelling units do we have? Other analyses, all the modules shown here, are somewhat more complex or very complex. We have things like: what are the local fiscal impacts? What are the public health impacts? For transportation you can do VMT analysis, vehicle miles traveled, and that's a much more complicated one that requires analysis of travel networks. The rest of the presentation is going to focus on these three parts: data, scenario development, and analysis. So let's start with the software stack, for those who are interested. Starting from the top and going down, on the web side of things we're currently using Polymaps for the maps; that's transitioning to Leaflet, since Polymaps is no longer supported. We use a JavaScript framework called SproutCore, which enables us to build a single-page web app. It's a very powerful model-view-controller framework, for those of you who know what that is; it lets us do a single-page app with a lot of complex functionality embedded in the system. On the server side we've got Django running on Python, with Postgres and PostGIS on top of Ubuntu, the usual suspects in this world.
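To make the Django/PostGIS part of that stack a bit more concrete, here is a minimal, hypothetical GeoDjango model for a parcel feature carrying the kinds of dwelling-unit and employment attributes described earlier. The model and field names are illustrative only, a sketch in that spirit, not UrbanFootprint's actual schema.

```python
# Hypothetical sketch of a base-year parcel feature in GeoDjango (Django + PostGIS).
# Field names are illustrative, not UrbanFootprint's real schema.
from django.contrib.gis.db import models


class Parcel(models.Model):
    """One base-year parcel with the attributes a client might supply."""
    jurisdiction = models.CharField(max_length=64)    # e.g. a city code
    land_use_code = models.CharField(max_length=32)   # client's land use classification
    dwelling_units = models.IntegerField(default=0)   # residential capacity on the parcel
    employment = models.IntegerField(default=0)       # total jobs on the parcel
    geometry = models.MultiPolygonField(srid=4326)    # stored as a PostGIS geometry

    def __str__(self):
        return f"{self.jurisdiction}:{self.land_use_code}"
```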
We also work with D3 for our charting, we do some Socket.IO code to send messages back and forth to the client, and we render our map tiles on the server side. Let me know if you have more questions on this; I could talk forever about it. So here's a first look at UrbanFootprint. This is a map of the city of Irvine in the Los Angeles region, showing their parcels colored by the land use codes they gave us. The map is of course the center of attention; we want to make sure that most of the interaction involves looking at the map. We support a limited set of features on the map, such as navigation of course, but also selecting features and very targeted editing of those features. We don't allow users to just edit any feature they want; we usually create specific editors tailored to their workflow. In the case of Irvine, they need to be able to review their land use codes, edit them, and then comment on the edits they made, and the little edit interface on the right does just that. Elsewhere in the app, you've got your various layers on the left; an unlimited number of layers can be brought into the system. We typically pre-configure layers for clients at this point because of the complexity, but we also have the ability to import certain types of layers and export layers to various formats. We don't yet have a styling tool or a map legend tool, but that's on our short-term roadmap, so we're hoping to have a pretty full-featured input, output, and editing process for layers in the near term. There's also a layer organizer: you can drag and drop layers to get the order on the map that you want. On the top is a query editing interface, one of the neatest parts of the app. It's very powerful SQL-based querying of the map features, and you can do all the typical SQL-style querying: filters, joins between any two layer tables, geographically or by attribute, and aggregation, so you can sum or average your dwelling units or employment and then group by other attributes. You might, for instance, have several jurisdictional codes representing different cities in your region; you might want to group by those codes and ask what your average employment is, or break it down by certain employment categories, and get the selections on the map. The query results we're seeing here are for individual features, but you can also get aggregate results, show them there, and export them to CSV and other formats. And finally, on the right, as I showed before, is the pop-out editor area, which is also the area where you can run analysis. We do a lot of different projects for different clients based on their needs. We just did a project for the Southern California Association of Governments, also known as SCAG, which is the largest metropolitan planning organization in the United States; they have 190 cities in six counties, so whenever you work with SCAG, you're dealing with immense amounts of data. This project was a data review pilot: they wanted to take the data they had at the regional level and expose it to the various cities in the region. So we picked several cities from Orange County and LA County to take a look at the data that SCAG had, check over the land use codes, make sure things were correct, and if not, make changes to those codes and add comments.
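As a rough illustration of the kind of SQL-style aggregation the query interface described above supports, here is a small, hypothetical example run directly against PostGIS with psycopg2; the table name, column names, and connection parameters are made up for the sketch.

```python
# Hypothetical aggregation in the spirit of the UrbanFootprint query interface:
# group parcels by jurisdiction and summarize employment and dwelling units.
import psycopg2

conn = psycopg2.connect(dbname="scenarios", user="planner", password="secret")
sql = """
    SELECT jurisdiction,
           AVG(employment)     AS avg_employment,
           SUM(dwelling_units) AS total_dwelling_units
    FROM   base_parcel                 -- illustrative table name
    WHERE  land_use_code = %s          -- simple attribute filter
    GROUP  BY jurisdiction
    ORDER  BY total_dwelling_units DESC;
"""
with conn, conn.cursor() as cur:
    cur.execute(sql, ("low_density_residential",))
    for jurisdiction, avg_emp, total_du in cur.fetchall():
        print(f"{jurisdiction}: avg employment {avg_emp:.1f}, dwelling units {total_du}")
conn.close()
```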
The SCAG pilot was a neat project because it forced us to implement, on the fly, some new features like a complete user permission system. So now we have user permissions that limit what region a certain client sees: if you log in as the city of Irvine, you only see the Irvine parcels, even though it's just one server serving up everything. We also have the ability to limit what is editable and who has administrative access. The other neat thing about this project is that, because it was designed to have staff at a certain jurisdiction making edits and then managers in that jurisdiction or the regional entity reviewing the parcels, we implemented a data review system: when somebody edits a land use code and comments on it, the manager can look at everything that has been edited and decide whether to approve it and merge it into the master copy, which we call the master scenario, as opposed to the draft scenario. Our focus has really been to implement features based on what our current clients need, and those features, because it's open source, are fully available to the other clients. We also implemented a fairly primitive data versioning system for this release: whenever any feature is edited, a revision is created, and you can go back and look at your revision history. So in some ways we're moving toward a Git-style repository system for the feature data, where we can show versions of the data and allow merging from a draft scenario into a master scenario. This kind of thing is really important for government agencies that need a lot of auditing and want to keep track of what's happening across a large number of jurisdictions. The next slide is about sketching futures; this is the second part of the three tiers I showed you. First we had the analysis of what was on the ground, and this is a project for the San Diego region's regional planning agency. They already had a number of year-2050 future scenarios on the books outside of UrbanFootprint. You can see on the top left about six different scenarios that we brought into the system, and showing on the map is one of those future scenarios; you can see the parcels colored along with the transit networks. And here's a first view of some of the D3 charts we create. These D3 charts are great because, as users update their parcels, as they paint them with new land uses, we constantly run analysis on the back end and update the charts. Each time you paint one or more parcels, you'll see the charts update, and you can see, well, now I have a different delta between my base year and my future year in terms of employment. We also have charts that let you compare two or more scenarios side by side, so you can see which scenario is performing better or worse in a certain category. And finally, managing all of this is a lot of data: each scenario typically contains a unique set of layer features, although some of the data that doesn't change can be shared. The goal is to allow clients to rapidly experiment with new scenarios: take one scenario, clone it a couple of times, make some changes, and see how it performs. So we're constantly trying to make the system faster and more distributed on the back end.
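Here is a deliberately simplified sketch of the revision idea described above: every edit to a feature appends a revision record, and a reviewer can later inspect the history and merge only approved edits from a draft into the master. It is a conceptual illustration only, not UrbanFootprint's actual versioning code.

```python
# Conceptual sketch of per-feature revision history (not UrbanFootprint's code).
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Revision:
    editor: str
    comment: str
    values: dict                              # attribute values after the edit
    timestamp: datetime = field(default_factory=datetime.utcnow)
    approved: bool = False                    # set by a reviewer before merging


@dataclass
class FeatureHistory:
    feature_id: int
    revisions: list = field(default_factory=list)

    def edit(self, editor: str, comment: str, **values) -> Revision:
        """Record an edit as a new revision instead of overwriting in place."""
        rev = Revision(editor=editor, comment=comment, values=values)
        self.revisions.append(rev)
        return rev

    def approved_values(self) -> dict:
        """Merge only reviewer-approved revisions, oldest first."""
        merged: dict = {}
        for rev in self.revisions:
            if rev.approved:
                merged.update(rev.values)
        return merged


# Example: a city staffer edits a land use code, then a manager approves it.
history = FeatureHistory(feature_id=1234)
rev = history.edit("irvine_staff", "Corrected land use code", land_use_code="mixed_use")
rev.approved = True
print(history.approved_values())              # {'land_use_code': 'mixed_use'}
```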
Clients have the ability to choose land use codes when they're painting their future parcels. This is something Calthorpe Associates really specializes in: a hierarchy of land use and building types. What we do is take sample buildings that exist in the real world and model them in the system; you can see the editor in the back there for a certain building. Then we combine similar buildings into what's called a building type, and that is what a client will typically use to paint their future scenarios. When clients are painting areas bigger than parcels, we have something called a place type, which lets them combine building information with other urban forms such as streets and parks into a combination of different assets on the ground. In the front image here you can see a visualization of one of those place types, with some aggregate attribute information as well as some D3 charts based on that information. So that was phase two, scenario development. Finally, we have to test the impacts using the various analysis modules. Again, you see public health, fiscal impacts, et cetera. These are analysis modules developed in-house at Calthorpe and outside, and they are all peer reviewed by the academic and scientific community and continually upgraded as new information becomes available. Since it's an open source project, our goal is to make these as transparent as possible and to provide feedback while the analysis modules run, so that users of the system and the public can understand how the numbers are being generated and have a reasonable amount of confidence in the results. Here's an example of one of the analysis modules: the vehicle miles traveled module running. This is a big one that takes a long time; we run distributed processes on the back end to speed it up. While you're developing scenarios, you can run this analysis at any time and get the aggregate results, so this is showing total VMT for the region in aggregate form. We can also show the results on the map, in this case going from green to red, where green is the least VMT near the city centers and red is the most VMT away from the city centers. And we can, of course, also represent it with custom VMT charts up on top there. These analysis modules are pretty exciting because they can be updated and extended, and we're always looking to add new ones to make the system more powerful and useful to our clients. We're also working on APIs to better expose the modules to the front end and to allow contribution and collaboration. And then finally, I'm just going to quickly show some of the results of the Vision California project. This is actually from the spreadsheet-based model, the RapidFire model, just to give you an idea of how persuasive some of these results can be to decision makers. This is for California in 2050, showing the amount of land that can be saved by adopting a smart growth policy instead of a business-as-usual policy: enough land saved to match Delaware and Rhode Island combined. And similarly, here's one for VMT, showing a tremendous number of vehicle miles traveled avoided by adopting the smart growth policy.
Here's an UrbanFootprint result from Vision California, again showing another VMT map, this one showing how much VMT increases in the LA region's outlying areas versus nearer the city centers. Here are a couple more: one showing how much water can be saved by 2050, fifty times Hetch Hetchy; similarly, building energy, enough power for all homes in California for eight years; household savings from energy, water, and auto fuel and ownership by concentrating land use; and greenhouse gas emissions, with significant savings by building more compactly, building closer to transit, and reducing passenger vehicle miles. There's a lot more information on these results if anybody's interested. Then, real quick, a couple of next steps for UrbanFootprint. We're working on scaling this up for multiple users, as we did for the LA region. Providing customer support is a big goal of ours: when you have a lot of different municipal workers, planners and others, using the software, you basically need 24-hour support. We're also constantly making user interface enhancements; people give us suggestions all the time, and we're always trying to make the product better. We're always working to improve the analytic modules as well: we're working on social equity indicators, climate adaptation and resilience, and conservation ecosystem services, and we have a few others in the works. The last thing is that we're really working on bringing this more into the local planning process. For community reviews of plans, we can get the community engaged by showing them the software and letting them see how their decisions, or decision makers' decisions, can impact their quality of life. We're also hoping to help with the general planning process for cities and regions, as well as assessing health impacts and helping with climate action planning. So there's a lot going on here; that's a super quick overview. Let me know if you have specific questions, either about what we do in terms of planning or in terms of software. [Audience] So the land use is associated with a land parcel. What about future scenarios where there is a huge land parcel which is now agricultural land, and the urban planner would like to split it into different sections? Do you deal with that? We don't support editing the parcel geometry right now in the software; that will probably be supported in the future. There are, I think, some ways to do it on the back end, so even if they can't split it up, we can bring it into the system so that it's different from the base year. [Audience] So currently, what the software does when you talk about painting is associate a swath of land with a given land use, right? All the parcels within that bounding box or whatever has been selected, so whole parcels? Whole parcels, although the way the analysis works, if you were to take an agricultural parcel and turn it into mixed-use development, the system is smart enough to know that that's not going to remain a single parcel; it analyzes it on the assumption that it would be broken up into many parcels. Okay. So one more technical question.
Since the software is of course multi-user, do you deal with people designing different scenarios against the same data set? We typically clone the feature sets that are going to be edited; that's where the data review and merging process, the sort of Git-like versioning, becomes really important. For each scenario we clone the minimum: we don't want to clone anything more than we have to, but obviously if they're editing, we have to make a new copy. [Audience] Are you cloning the geometry too, or just the attributes? We clone the geometry too. Isn't that a bit expensive? It is, yeah. All these things can be optimized, and it would be nice. In the future, the goal would be to have the geometry normalized and just referenced separately; we do a little bit of that, and it will keep getting better as time goes on. Thanks. [Audience] As an open source project, is this something that somebody would need to go through you as a client for, or could somebody just download it and implement it at their city? At this point you have to go through us. Our goal is to make it more and more friendly. It's a complicated process and will probably usually need some amount of help, but the goal, as an open source project, is to make it more and more accessible, so there is more plug and play to it and we start publishing the data standards so you know how to set it up and get your data into the system. It does run on the Amazon cloud; that's the way we typically run it, and it's getting to the point where it's very distributed, so it typically requires some amount of assistance to get set up and running. Okay, well, I think my time is up, so thank you very much.
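Picking up on the geometry-normalization idea from the Q&A, here is a tiny conceptual sketch of storing geometry once and having scenarios reference it by key, so cloning a scenario copies only lightweight attribute rows. It is an illustration of the optimization direction mentioned, not how UrbanFootprint actually stores its data.

```python
# Conceptual sketch: scenarios reference shared geometry by key instead of cloning it.
geometries = {                 # shared geometry store: one copy per parcel shape
    101: "MULTIPOLYGON(((0 0, 0 1, 1 1, 1 0, 0 0)))",
}

base_scenario = {              # scenarios hold attributes plus a geometry key
    101: {"geom_id": 101, "land_use_code": "agriculture", "dwelling_units": 1},
}

# Cloning copies only the attribute rows; geometry stays shared until a shape changes.
draft_scenario = {fid: dict(attrs) for fid, attrs in base_scenario.items()}
draft_scenario[101]["land_use_code"] = "mixed_use"

print(geometries[draft_scenario[101]["geom_id"]])   # same shared geometry as the base
```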
|
UrbanFootprint is a new open source scenario planning tool that seeks to revolutionize the practice of planning, with the potential to allow for a closer integration with research, public involvement, and education. With version 1.1 alpha now complete and the next version currently under development, UrbanFootprint is a state-of-the-art model that uses open source geographic information system (GIS) technology to create and evaluate physical land use/transportation investment scenarios. It is designed to be deployed by government agencies, private entities, and NGOs. The model translates disparate data describing the existing environment and future urban development plans into a common data language, and defines future scenarios through the application of a new common set of 'Place Types'. The model's suite of Place Types represents a complete range of development types and patterns, from higher-density mixed-use centers, to separated-use residential and commercial areas, to institutional and industrial areas. The physical and demographic characteristics associated with the Place Types are used to calculate the impacts of each scenario. UrbanFootprint represents the new standard for scenario modeling tools intended for use by urban and regional planners at the local, county, regional, state, or national level. Running on a backbone of PostGIS, PostgreSQL, and Ubuntu Linux 64-bit, it takes full advantage of today's hardware processing capabilities to model the impacts of future urban growth scenarios on the base (existing) environment in future years and to generate outcomes for a full list of metrics, including: travel behavior (vehicle miles traveled, transit trips, walking trips, fuel consumed, fuel cost, criteria pollutant emissions, transportation electricity consumed and impacts); energy and water consumption (for transportation and buildings); land consumption by type; infrastructure cost (capital and operations & maintenance); city revenue from residential development; public health impacts (obesity, asthma, rhinitis, pedestrian-vehicle collisions, respiratory and cardiovascular health incidences); and greenhouse gas (GHG) emissions.
|
10.5446/31976 (DOI)
|
|
Couchbase Lite is an Apache-licensed native JSON database for iOS and Android with offline synchronization support. At a hackathon last year it took a couple of hours to add GeoCouch-style bbox queries to Couchbase Lite. I'll walk through the implementation of the geo indexer and how it fits into the Couchbase Lite codebase. Also expect to be wowed by examples of the power of sync + geo.
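As a rough, language-agnostic illustration of what a GeoCouch-style bounding-box query does, here is a small Python sketch that indexes documents by bounding box and returns those intersecting a query box. It is just the underlying idea, not the Couchbase Lite API or the actual geo indexer described in the talk.

```python
# Sketch of the bbox-query idea: keep a bounding box per document, then return
# documents whose box intersects the query box. Not Couchbase Lite code.
from typing import Dict, List, Tuple

BBox = Tuple[float, float, float, float]   # (min_x, min_y, max_x, max_y)


def intersects(a: BBox, b: BBox) -> bool:
    """True when two bounding boxes overlap (or touch)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]


def bbox_query(index: Dict[str, BBox], query: BBox) -> List[str]:
    """Linear scan over an id -> bbox index; a real indexer would use an R-tree."""
    return [doc_id for doc_id, box in index.items() if intersects(box, query)]


# Example: two point documents indexed by degenerate (point-sized) boxes.
index = {
    "cafe": (-122.68, 45.52, -122.68, 45.52),
    "park": (-122.40, 45.60, -122.39, 45.61),
}
print(bbox_query(index, (-122.70, 45.50, -122.60, 45.55)))   # ['cafe']
```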
|
10.5446/31977 (DOI)
|
Joining us from Athens is Angelos Tzotsos. Thank you, Brian. So, welcome, everybody, to this session. We are here to give you a very fast overview of what OSGeo-Live is about and what you can do with the project, so during the next hour we will go through all the details. So, what is OSGeo-Live? OSGeo-Live is a Linux distribution containing the greatest open source geospatial software. It is created as a bootable DVD, an ISO file, and it is possible to demonstrate all the projects by just spinning up a VM, using a DVD, or running it from a bootable USB drive. The DVD contains more than 50 open source geospatial applications, but it is not all about the software: we also include sample data sets, we include documentation which is consistent across all the projects, and, very importantly, we have translations into many languages. I already told you that this is distributed as a DVD, but nowadays we mostly just create an ISO, which you can download and use to create a virtual machine or a bootable USB and just run the system; the ISO can also be burned to a DVD, so you can create a DVD and run OSGeo-Live from it. One thing I have to mention is that we make sure the projects included in the OSGeo-Live DVD are of very high quality. We make sure they are stable and working, and with every new release we try our best to test whether we have issues. We try not to include beta versions or anything that is not ready, so that users get the best experience. We have quality criteria: we want the community of a project to be very active and the software to be established and stable, and we have a page in the documentation which shows how each piece of software measures up. So why is OSGeo-Live useful, and what can you do with it? Basically, you can do demonstrations of OSGeo software. OSGeo-Live includes all the graduated OSGeo projects, but it also includes projects that are in incubation or in OSGeo Labs, and it is a very good showcase of what is in the OSGeo stack. We have everything in one place; it is almost like a product these days, because you have the whole stack gathered into one installation and you can test with it. You can provide workshops, you can provide lectures, you can demonstrate it to clients, you can do development. What we do not really support is using it for production, because it is a big installation of many pieces of software; it is not made to run a server, although some people might use it for that. It also has great training resources included, which is a very good property of OSGeo-Live: all the resources are included, and we have quick start tutorials you can follow. So if you want to get to know a project, you can go to the documentation, follow the steps for using the software, and get to know it better. What is important is that we have established a production and marketing pipeline for the project, which means we take the software from the development teams, we use Ubuntu as our base operating system, and we do translations. We mix all this together into one system that can actually be tested and given away to people, so that they know what this geospatial software is all about.
As you can see here, we provide it at conferences and in workshop sessions, and we hope that this pipeline reaches decision makers, so as to provide support for the projects through new funding or growth of the community, which is very important. How do we create it? Since we are based on Ubuntu, we try to use the standard tools. We use the standard process for making an Ubuntu spin-off: we use Debian packaging for the base system, but we also have installers. Every project provides an installer for its software, which we use to install the project into OSGeo-Live. OSGeo-Live development follows closely the UbuntuGIS and DebianGIS projects; we interact with each other, we contribute, and we get contributions back. We use Subversion as our source control management, and we monitor all the issues in Trac. Lately we have been doing continuous integration, which means building the software every day or every couple of days, so that if something breaks we can spot it right away. This has been working pretty well the last few years. What is new in version 8? Version 8 was the release that moved from Ubuntu 12.04 to the new LTS: we are now using Lubuntu 14.04, actually 14.04.1, which was released shortly before OSGeo-Live 8. During this process we upgraded 37 projects; 37 projects provided upgraded installers, which we then used to install them into the system. At this point I would like to have a short discussion about OSGeo itself, because we always mention OSGeo, but what is it? OSGeo is a non-profit umbrella for geospatial open source software, established in 2006. The OSGeo Foundation was set up as an umbrella for supporting development and promoting the quality of open source software; it is also about open data and education. These days you will hear about the Geo for All initiative, which OSGeo is supporting, with many laboratories from many universities around the world, and all these laboratories are using OSGeo-Live to promote open source software. OSGeo-Live is not only about software: we have a reputation for very good support of, and compliance with, standards. Most of the projects included in OSGeo-Live implement the OGC standards, and we have reference implementations included in OSGeo-Live, so supporting the open standards is very important to us. We also include the OGC standards in our documentation, so you can go through the documentation and find out what WMS is, what WFS is, and the rest of the OGC standards. Now we are going to give you an overview of the projects included in OSGeo-Live. We will start with desktop GIS, the heavy-duty desktop software that we all use to edit, view, and analyze our data. The first project included is QGIS. QGIS is very well known, it has a large community, and it is a very popular and user-friendly GIS client, with many plugins available for processing. It supports raster and vector data and databases, and it is excellent for creating maps. Version 2.4, yes, so we have the latest version of QGIS included in OSGeo-Live 8. The next project is GRASS GIS, which is now over 30 years old. GRASS GIS is the flagship of OSGeo, because it all started from GRASS, so it is a very important part of OSGeo and OSGeo-Live.
It has hundreds of tools for processing raster and vector data, and it can do sophisticated representation of 2D and 3D spatial data. Next is gvSIG, which is also a very well-known desktop GIS. The development of gvSIG started in 2003 in Valencia, Spain, where the regional government decided to create a new project, distribute it to the community, and replace the existing proprietary software they already had. So gvSIG was created, and it is still very much used there, but it is also used around the world, and it is very good at analyzing and visualizing spatial data. Next we have uDig, which is a user-friendly desktop GIS. It is Java-based and built upon GeoTools and the Eclipse development environment, so it is a very good choice for developers who want to integrate mapping into their Java applications, and it is also provided inside OSGeo-Live. The next project is OpenJUMP, a spin-off of the original JUMP project, which was also open source but got forked a lot; after a while people decided to merge the forks back into a common open project, which was named OpenJUMP. It is very good at analyzing and displaying data, and it has good topological tools. One of the forks of the original JUMP project is Kosmo, which is also included in the OSGeo-Live distribution; it is well maintained by a strong Spanish community and is quite popular and widely used. The last of the desktop GIS packages we include in OSGeo-Live is SAGA, which was initially a German project. It stands for System for Automated Geoscientific Analyses. It has a large number of modules for analyzing vector, table, grid, and image data, and it is very strong at geostatistics, image classification, projections, hydrology, landscape development, and terrain analysis. Many of those tools are also available in other projects like QGIS via a plugin, and the same is true for GRASS, which is also available as a plugin within QGIS. So all these desktop GIS applications are included in OSGeo-Live 8. Let us move now to the browser: here is an overview of the GIS web frameworks we have. First of all, OpenLayers is included, which is a browser-based mapping library written in JavaScript; it is easy to install and has no server-side dependencies. In OSGeo-Live many examples are included, and you can build your own application right there, based on those examples, and demonstrate OpenLayers. Currently we have OpenLayers 2; OpenLayers 3 was released a few days ago, so I think it is going to be in the next version. Next we have Leaflet, the mobile-friendly interactive maps framework. It is also a JavaScript library, designed to work around old browsers, and it supports mobile platforms. The main ideas behind Leaflet are simplicity, performance, and usability, so it is a very good framework for making maps. Next we include Geomajas, which is a browser-based GIS client: a range of spatial tools wrapped in a thin browser mapping framework, with Java and GeoTools on the server side so that processing can be driven from the browser.
We include a demonstration application based on the library, and you can also create your own application using it. Next we have Mapbender, which is a geoportal application; this is Mapbender3, recently upgraded from the previous version we had. It is very useful for publishing, registering, viewing, navigating, and monitoring services, for granting secure access to users, and for creating a nice spatial data infrastructure. Next we have GeoMoose, a mapping framework built upon OpenLayers and MapServer; it is very useful for managing spatial and non-spatial data within counties, cities, and municipal offices, and it provides services for viewing and organizing many layers and doing dataset searches. The next project is Cartaro, a content management system based on Drupal; it builds upon Drupal to offer maps, and it also uses GeoServer, which we will cover later. The next framework is GeoNode, which is also a content management system; the difference from Cartaro is that this one is based on Python. It provides a nice way of publishing and uploading data through your browser, filling in metadata as a UI for a CSW, and it has features like user ratings, user comments, and sharing maps between users. So this is also content management, but with some extra features. Now we go to the web services. This is the family of projects used for building the spatial geoservices deployed around the world; many geoportals these days use these projects as their infrastructure, so it is very important to go through them. The first is GeoServer, one of the most popular web service applications. It provides WMS, WFS, more recently a catalog service, and a web processing service, WPS. It comes with a nice UI so that you can manage the data through your browser instead of editing text files, and it can connect to many data sources at once. One of the very first OSGeo projects is MapServer, which is one of the most stable and most widely used around the world. It was initially software for providing an open source web map service, but it now also provides WFS and the Web Coverage Service, WCS. It connects to a wide range of databases and data stores; it is written in C, and it supports other languages like Python, Ruby, and more. The next project is deegree. It is a very robust application and has been the reference implementation for many OGC specifications; it also includes a transactional web feature service and three-dimensional support for data. Also in OSGeo-Live 8 we include ncWMS, a web map server focused on multi-dimensional data, environmental and weather data, with an OGC WMS interface. Next is EOxServer, a Web Coverage Service implementation: a system for accessing large amounts of satellite and Earth observation data. It is possible to select subsets and use a slider to move through time dimensions, so you can find data from a catalog. EOxServer is now being used by ESA, and it is going to be used in ESA's next portal for locating satellite data. The next project is GeoNetwork. GeoNetwork was the first catalog to graduate from the OSGeo incubation program. It is a CSW catalog, compliant with the OGC service specification, and it can be used to create, maintain, and search metadata about specific data sets.
Metadata is data about the data, so you store information like creation date, author, and title, and then you can run searches on that metadata in order to discover your data. We also provide pycsw, which is the OGC reference implementation for the CSW specification. It is also a CSW server, and it is written in Python. The difference is that pycsw is a lightweight server rather than a full-blown UI application, but it is very widely used these days: it is used in data.gov, it is used by NOAA, and by other organizations and other countries. It stores XML metadata in a database, and you can run queries against the server to discover your data. Next is MapProxy, a proxy server for WMS and tile services. MapProxy sits between a WMS or another data source and your application, and it can provide caching for a deployment you might do with other OGC WMS servers. So it is used to speed up a web application by serving tiles that are cached and generated on request. The next project is QGIS Server. This is part of the QGIS project, which provides QGIS Desktop, so there is now also a web map service included in QGIS. The nice feature is that you can create a map inside QGIS and then publish the same map to your web server, and you get a map that looks exactly the same as on your desktop. Next is 52North WPS, a web processing service. It is a Java-based project used to provide access to geospatial processing algorithms through the web, so users can do spatial analysis remotely. We also have 52North SOS, the Sensor Observation Service. This provides a standard interface for reading live and archived data captured by sensors. You can have satellite data, cameras, water level meters, or any other sensor you can imagine, and you can pull data from those and serve it through the web. The last web service included in OSGeo-Live is the ZOO WPS, part of the ZOO-Project, which is a developer-friendly WPS framework for creating and chaining web processing services. In ZOO you can take your own implementation of an algorithm, plug it in, and ZOO will create the WPS interface for your library or function, which you can then use remotely. It supports C, Python, Java, C#, and more. Now we move to the data stores, the bottom of the stack where our data live. This project does not need much of an introduction; it is used everywhere: PostGIS, the spatial extension of PostgreSQL. It is very well known. It allows users to do analysis and it supports web map applications and the desktop: you can store your data in PostGIS, do analysis, query the data, and everything you can imagine. It is very fast, it is stable, it is standards compliant, and you have hundreds of spatial functions out of the box. We also include in OSGeo-Live SpatiaLite, which is a lightweight database based on the popular SQLite database. SQLite is a self-contained relational database that lives in a single file, so you actually have one file that provides all the SQL functionality, which you can take away on a USB stick.
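For a flavor of the spatial SQL that PostGIS (and, with minor differences, SpatiaLite) provides out of the box, here is a small, hypothetical example; the table names, column names, and connection parameters are made up for the sketch.

```python
# Hypothetical spatial query against PostGIS: count parcels within 500 m of each
# transit stop. Table and column names are illustrative only.
import psycopg2

conn = psycopg2.connect(dbname="osgeo_demo", user="user", password="secret")
sql = """
    SELECT s.stop_name,
           COUNT(p.id) AS parcels_within_500m
    FROM   transit_stops s
    JOIN   parcels p
      ON   ST_DWithin(s.geom::geography, p.geom::geography, 500)   -- distance in meters
    GROUP  BY s.stop_name
    ORDER  BY parcels_within_500m DESC;
"""
with conn, conn.cursor() as cur:
    cur.execute(sql)
    for stop_name, count in cur.fetchall():
        print(f"{stop_name}: {count} parcels within 500 m")
conn.close()
```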
You can also create those SpatiaLite files from the desktop applications. The next database is rasdaman, and rasdaman is all about raster data: it is a raster array database management system. The main advantage of rasdaman is that it stores raster data in an underlying database, which can be PostgreSQL, for example. It handles multi-dimensional raster data, so you can have time series, hyperspectral data, or any other kind of raster data, and you can run SQL-like queries directly against the database. The last project in this group is pgRouting, the routing extension for PostGIS. You can run queries to find the shortest path between points inside the database, which simplifies the routing functionality and the maintenance of your data: you can load your points there and do any kind of routing between them. The next category is navigation and maps. Here we have GpsDrive, a GPS navigation system. It can be used in a car, on a bike, in an airplane, whatever you can think of; it displays your position as provided by a GPS, so you can attach a GPS to your laptop and use GpsDrive to see where you are. We also have GpsPrune, a tool for viewing, editing, and converting coordinate data from GPS devices, which you can also show on a map. Next is Marble. Marble is a 3D virtual globe, similar to Google Earth or NASA World Wind. It was developed as part of the KDE project and now supports various data sources. You can zoom, pan, and even read the Wikipedia descriptions of places from within the application. A feature I personally like is that it has historical maps, so you can find very old layers. Next is OpenCPN, which provides free navigation software for use on ships. It has been developed by a team of active sailors and is used in real-world conditions. We also have OpenStreetMap tools, meaning applications used with the OpenStreetMap database: JOSM and Merkaartor, plus links to the online OpenStreetMap tools. Within the OSGeo-Live DVD we extract a small part of the OpenStreetMap data, so you can edit this data offline and then submit your changes back. Next we have Viking, which is also a GPS data viewer and analyzer, and it can use OpenStreetMap as a background. The next category is a group we call spatial tools: tools with specific analysis capabilities. First we have GeoKettle, an extract, transform, load (ETL) tool. It is a spatially enabled version of Pentaho Data Integration, comparable with FME in functionality, and it can be used to produce automated and complex data processing chains. Next we have GMT, a collection of tools that lets users manipulate point data and do filtering, trend fitting, gridding, reprojections, and so on. Mapnik is also included in OSGeo-Live; it is a C++ toolkit, with Python bindings, for rendering beautiful maps with clean and soft edges. It was initially used for OpenStreetMap rendering, and it has features like intelligent label placement and scalable SVG symbolization. Next, we have TileMill, a design studio for creating beautiful web-based interactive maps from a wide range of existing spatial data sources.
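TileMill uses Mapnik under the hood, and Mapnik's Python bindings make basic rendering quite short. The sketch below assumes the python-mapnik bindings are available and that a Mapnik XML stylesheet called style.xml exists; the filename is made up for this example.

```python
# Minimal Mapnik rendering sketch: load a stylesheet and render a PNG.
# Assumes the python-mapnik bindings and a stylesheet file named style.xml
# (the filename is a placeholder for this example).
import mapnik

m = mapnik.Map(800, 600)                    # map canvas in pixels
mapnik.load_map(m, "style.xml")             # layers and symbolizers come from the XML
m.zoom_all()                                # zoom to the combined extent of the layers
mapnik.render_to_file(m, "map.png", "png")  # write the rendered image to disk
print("wrote map.png")
```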
You can create a map within TileMill, render a set of tiles from it, and then upload them to the web. We also include MapTiler, another tool for creating tiles, which can then be stored on a file system or published directly to a web server or cloud storage, so you have tile sources for your application. It can be used with OpenLayers and Google Maps, and it can be easily customized. The next project is OSSIM, the Open Source Software Image Map. It is a high-performance engine for remote sensing and image processing, and it can also be used for GIS and photogrammetry. It comes as a library as well as some applications with a UI, and you can actually do photogrammetry with it. The next project, which also builds on OSSIM, is the Orfeo Toolbox. Orfeo Toolbox is a newer package which brings high-performance image processing capabilities to OSGeo-Live. It is funded by the French space agency, CNES; it is part of a big program around the Pléiades satellites, where they offer this software alongside their Earth observation data so there is a free and open source tool to process the data. It supports optical and radar images, and it has state-of-the-art algorithms, because it is based on ITK, the Insight Toolkit, so it offers capabilities like change detection, pattern matching, and so on. Also in OSGeo-Live 8 we include R, a powerful language and widely used software environment for statistical computing and graphics. We include many geospatial extensions on the DVD, so you can process your data and visualize the results. Next, we have a group we call domain-specific GIS: applications targeted at specific domains. The first one is Sahana, a web-based collaboration tool for coordinating when a disaster happens. It offers features like finding missing people, managing volunteers, and tracking camps efficiently across government groups, and similar functionality. It was initially created in Sri Lanka during the 2004 tsunami, was used there, and has been developed since then. The next project is Ushahidi, a platform that allows everyone to submit data through SMS, email, or the web and visualizes it on a map or a timeline. It has been used in emergencies since 2008. The next project is osgEarth, a scalable 3D terrain-rendering toolkit for OpenSceneGraph; you can create a nice 3D visualization by just editing a simple XML file. The next project is MB-System, which processes and displays bathymetry and backscatter imagery data derived from multibeam, interferometry, and sidescan sonars. Next we have zyGrib, an application for showing weather maps; you can download and visualize weather data from zyGrib data sources, and it also offers access to historical data. I already told you that we include data, not only software; here is what kind of data we have. The first source is the Natural Earth project, which provides public domain map data for creating small-scale world, regional, and country maps at a range of scales. This data set is included in OSGeo-Live, so you can use the shapefiles to create your own maps within the DVD.
Next, we have the North Carolina educational data set, which is bundled inside OSGeo-Live. It provides raster data, vector data, a watershed model, elevation maps, land use and land cover, and a sample of Landsat 7 imagery. We also have OSM data, OpenStreetMap data: each time, we take a snapshot of the city where FOSS4G is happening, so a small sample of the Portland data set is included in OSGeo-Live 8, and you can extract the data and make modifications directly inside OSGeo-Live. The last data set is a NetCDF time series data set, which includes annual maximum daily temperatures and annual maximum consecutive five-day precipitation since 1850. Okay, so we have all those applications, but what about development? We have included in OSGeo-Live all the geospatial libraries commonly used for development. First of all, we have GDAL and OGR, which are best known for providing access to raster and vector data; they are used by many open source, but also proprietary, applications. You can use the command line tools within OSGeo-Live to do processing and transformation of files, so that is also available. Next we have the JTS Topology Suite, a Java library of spatial predicates and functions for processing geometries. It is used by many Java-based open source spatial applications, and it provides robust implementations of fundamental algorithms that you can use to create your own application. We also have GEOS, the port of JTS to C and C++, so you can do the same things from C and C++. We also include GeoTools, which is heavily used within the Java world; it provides standards-based geospatial data structures, connectors to many data sources, and functionality for data manipulation and rendering. We include the MetaCRS project as well, which is a collection of five different projects that provide algorithms for transforming between coordinate reference systems. Next we have libLAS, a C/C++ library for reading and writing LiDAR data; it is used as a library, but it also has some command line tools. We have Iris, a Python library for analyzing and visualizing meteorological and oceanographic data sets. And our latest addition on the development side is IPython, a web-based interactive environment where you can combine code execution, text, mathematics, plots, and other rich media into a single document and share it online. It is not a purely geospatial project, but it can be used with all the Python libraries included in OSGeo-Live, so it is a great tool to showcase the geospatial Python libraries from within OSGeo-Live and to do development. We ship a sample IPython notebook inside OSGeo-Live, but you can also pull other notebooks from GitHub and use them directly. There are also a few applications that we have installers for but that are not actually installed inside OSGeo-Live because they don't run on Linux. The first is MapGuide Open Source, a web-based platform that enables users to develop and deploy web mapping applications and geospatial web services. Actually, this was ported to Linux, right? It can run on Linux, but it didn't fit on the DVD. We also have MapWindow, a desktop GIS client written in .NET. So these were the projects that are included in OSGeo-Live.
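As a quick taste of the GDAL/OGR Python bindings mentioned above, here is a short sketch that opens a vector dataset and reports its feature count and extent; the filename is a placeholder, not a specific file shipped on the DVD.

```python
# Small GDAL/OGR example: open a vector dataset and summarize it.
# "countries.shp" is a placeholder path, not a specific OSGeo-Live file.
from osgeo import ogr

ogr.UseExceptions()                          # raise Python exceptions on errors
dataset = ogr.Open("countries.shp")
layer = dataset.GetLayer(0)

print("Layer name:   ", layer.GetName())
print("Feature count:", layer.GetFeatureCount())
print("Extent:       ", layer.GetExtent())   # (min_x, max_x, min_y, max_y)

for feature in layer:                        # iterate features and inspect geometry
    geom = feature.GetGeometryRef()
    print(feature.GetFID(), geom.GetGeometryName(), geom.Centroid().ExportToWkt())
```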
So the next question is how you can contribute to the project and join in — what can you do for OSGeo-Live? First of all, you can subscribe to our mailing list and introduce yourself. You can attend an IRC meeting and become part of the team. We need translators, we need testers, we need developers — I'm pretty sure we can find something you can do. So what can you do concretely? You can improve the website or the documentation, or you can help improve the live DVD itself if you are into core Linux development. How can you improve the OSGeo-Live documentation? First, you can review the quick start tutorials: you can test them and always provide feedback, and if something has changed you can go directly to SVN and update the actual tutorial. It is also a good opportunity to learn the software by just running through the quick starts. It is very important to keep the translations up to date, so if you speak a language other than English, you can offer your knowledge and translate the project — that is a great activity that can happen in local chapters. We also need help with testing and bug hunting. We need to locate bugs, and we need your help: there are so many applications that it is very difficult for the core development team to test everything, so we need you to help us test. You can also contribute by adding your own software if you have created some; on the wiki we have guidelines on how to get your own software included. So in general, we need you to help us. And how can you help us here at FOSS4G? If you are here on Saturday, please join us for the code sprint, where we can find ways to improve OSGeo-Live and work towards OSGeo-Live 8.5, and we can do translations, testing, and reviewing. These are the links you need to follow if you want to get involved; everything is included in this list. As you can see here, we have a large number of contributors and people who have been involved, and we thank all those people for their help creating the OSGeo-Live project. We would also like to acknowledge our sponsors, who offer either their servers or their time to make this project happen. So that's about it. Any questions? Yeah, please. Hello. Here, the microphone is on. Hello. Before the last five minutes with questions, thank you so much, Angelos, for the encyclopedic overview of OSGeo-Live. We've just seen an A to Z of all the open source mapping software you could ever think of, and things that you've never seen before. There are two points I'd like to make very quickly before we go to Q&A about the purpose of OSGeo-Live and its use in the field. One is its use for communication, and the second is its use for building your own systems. On the communication side, this DVD was originally developed specifically for low spec hardware for people that are out in the field, perhaps in a Spanish speaking area, or where they speak Indonesian, or somewhere far afield, where you can boot low spec hardware with the DVD and very quickly have volunteers doing field work.
As a communications tool in a more urban environment, for people who do not have very much time and have not seen this huge array of software before, I want to make the point that every one of those projects has an 8.5 by 11 professionally prepared presentation overview page, and a second 8.5 by 11, in a matching graphic style, that takes an intelligent person who has otherwise not seen the software through a step-by-step series of how to try the software for the first time. The second part is the implementation for your own computers at home, in your lab, at your school, or at your business. All of the install scripts for every one of those applications are open and published in the SVN repository. If you take standard hardware with the Ubuntu operating system base, you can take those scripts — in most cases with almost no preparation — run them, and install the software for yourself. So it serves as a reference implementation source or installer tool. Anyhow, okay, with that we have about four minutes left for questions and answers on this encyclopedia of software. If you do have a question, please use a mic in the aisle; there are a couple of mics set up and I'm happy to take questions. Okay, yes, Margie. Is there a reason I didn't see IPython in the list? The IPython notebook is a relatively recent addition, and there are several places where the exhaustive list is printed, so we may have missed a spot — we will catch up on that. I think we didn't include it in the list because it's not purely geospatial, it's more like a development tool. I think that's the reason why it was not in the list, but we can add it. Thank you for the feedback. Any other questions? All right, well, we'll wrap up a little bit early. Thank you for your attendance.
|
This presentation provides an overview of the breadth of quality geospatial open source applications, which are available for the full range of geospatial use cases, including storage, publishing, viewing, analysis and manipulation of data. The presentation is based upon documentation from OSGeoLive, which is a self-contained DVD, USB thumb drive and Virtual Machine, based on Lubuntu GNU/Linux. It includes over 50 of the best geospatial, open source applications, pre-configured with data, project overviews and quick-starts, translated into multiple languages. It is an excellent tool for demonstrating Geospatial Open Source, for use in tutorials and workshops, or for providing to potential new users. This presentation is very useful for anyone wishing to gain a high level understanding of the breadth of Geospatial Open Source available, and is often presented at the start of spatial conferences to help attendees select targeted presentations later in the conference.
|
10.5446/31978 (DOI)
|
Yes, absolutely. The you you you you thank you for coming to this presentation. This is not a serious map by the way. Unless you live in a Dimeboks or Old Dimeboks, Texas. I don't think anybody does, so... Anyway, and also thank you for on the last day of the conference coming to a presentation that contains the word statistics in the title. And I tried to come up with a synopsis statement and then I, whatever I had under here, I thought it, if my title hadn't put you to sleep, this would. But basically, and I have to admit that I was somewhat scatterbrained about forming this, I keep coming up with different angles to try to follow about this subject matter, whether to focus on specific apps like MapServer, which is where my experience is. Can y'all hear okay? Okay. Oh, thank you. How's that? Yeah, I can hear myself now. Anyway, where this started was, or my background and why I want to study this is I usually work for my maps and MapServer with post GIS data sets and I found myself spending more time than I wish on trial and error, particularly around styles development and around how they relate to class breaks, how they relate to the scale ranges where the scale max and min denominations are and so forth for different classes and for different layers and so forth. And also I work with a data set. It's common in the United States, the US Census Tiger data set, which is an all-USA resource which is comprehensive but error prone and in particular, one of my conditions I run into a lot is where if the classification system is either just plain poor, it's either not applied well, like it has errors and blanks and inconsistencies or it's not really directed at the task of mapping as well as I would like. Hold on. That's not what I wanted to do. Try that. All right. And also this, even though what I've been studying is somewhat similar to thematic mapping, that's not really what I'm after. I'm after more styling for a basic base map. And so I'm not specifically telling a particular story like, I don't even know, I stole this from some Google Images page. Whatever theme they're doing here is very specific. That's not really what I'm angling at. I'm angling at really how to tell the best base map story so that important features are emphasized over not so important features and so forth. And so, oh, this is, let's see. You know, I was talking about the classification schemes that are already in the data. I've mostly mentioned these points already that you may find that the classes you want for mapping may come from a multitude of columns in your table. And various expressions of that are, let's see. Or I've kind of cases where the priority scheme just doesn't give me a clue about how I would want to prioritize. And so what I've found in researching this is that there's really two sides of the equation. One side is the setting side. Basically what are you setting? Like, and again, I'm usually working in Map Server. I'll show you in a minute some of the work I do in there. But also, I'm basically trying to focus on some really some troubling problems where in my work with trying to set up a map file for the base map portion, I'm finding that my solutions don't fit everything perfectly. That I'll come into conditions where the condition of crowding or how I've stacked the layers or where I've set the scale breaks or how I've classified or what symbols, what fonts, what you know, everything I'm using, they apply mostly and then I'm finding that they break down in certain spots. 
And if I try to analyze the spots, I find it's a very, sometimes it's a subtle condition that's hard to describe, even harder to describe a solution for. And there's a series of what I call hazards or limitations to trying to set up the scales and such. For example, your dataset, you may have constraints on your data where you cannot alter it or you may have a mapping scheme that you're required to follow where the number of scale breaks and so forth are already set. And so you can't doctor that. But and one of the things I've found with the Tiger dataset and particularly working with road data is their original roads are partitioned not really for the purpose of mapping but for addressing maintenance that in order for them to collect their census records and to provide the foundation for their demographic mapping, they've partitioned the streets at intersections with each other and with other features. Which is nice for record keeping but it's not so great for mapping. You get more segments than you really wish. Now they also model in recent releases of Tiger, they model roads more in a map friendly way, although it's got its own partition. They partition at U.S. county boundaries and other things that I might not wish to see in some of my map views. And one of the things I did is briefly went through some of the existing projects outside of Map Server to see where there are some clues about how to use the statistics about the data in order to classify. I captured a screen here from QGIS where in order to set colors it's a one of their color schemes, it's called graduated where you can assign several different schemes to class break and you can pick the number of class breaks and then you can pick one of their color schemes. In this case there was a green, I guess it's a green saturation scale and this particular one is called natural breaks where it tries to find the separations that are inherent in the data set and in order to make the groupings that you choose follow the natural groupings that are inherent in the data. And then I've also looked, I've not studied, I'm not a Geo Server person so I have not extensively studied Geo Server but I did find a few things in the documentation that I thought would give some help in trying to assign data values to style values. And Map Server has a feature that I've not used before this that's not well documented either, it's where you can specify a color range and what the data values are that are assigned to that and what data column that is. And so in order to, these are the basic types of style and layering and other settings that I have to work with. In generic terms it's the order in which you layer and for purposes of map quality I think one of the things that you need to analyze is what particularly happens when a particular layer covers another layer. What's going to be the consequences of I'll say your city outlines layer placing over on top of your roads or something like that. What are the cons, where will it matter, where will it not? And again the classification breaks how you use class particularly like in Map Server how you use the class scheme in order to symbolize over different classifications. And where you set your scale ranges and then of course how you, what line weights and line styles you pick for emphasizing and the same for symbols, the sizing of your symbols, which symbol, what effects of shadows and so forth to help you emphasize. 
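To make the class-break question concrete outside any particular engine: candidate breaks can be computed straight from the attribute values and then copied into MapServer CLASS expressions or an SLD. A small sketch using plain quantiles with numpy — this is not the Jenks "natural breaks" method that QGIS offers, just an illustration of deriving breaks from the data, and the sample values are invented:

# Derive candidate class breaks from an attribute column and print them,
# e.g. to be copied into MapServer CLASS EXPRESSION blocks by hand.
import numpy as np

# Stand-in data: imagine these are road lengths or city populations
# pulled from your PostGIS table.
values = np.array([120, 340, 560, 900, 1500, 2300, 4100, 8000, 15000, 31000])

def quantile_breaks(vals, n_classes):
    """Return n_classes - 1 break values that split vals into equal-count bins."""
    qs = np.linspace(0, 100, n_classes + 1)[1:-1]
    return np.percentile(vals, qs)

for brk in quantile_breaks(values, 4):
    print("break at %.0f" % brk)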
And then a particular note about, oh here's the, sorry the list of the Map Server words that are affected by making these classifications and the yellow ones are the ones that I think are the most primary important. And the greens are actually like max features at the layer level and so forth I think are more like reasons you don't need to, you can actually use some shortcuts in like for example max features says you know I'm only going to show, I'm only going to show X number of features in this, in this, in this layer and so forth. And a particular note toward color I think there's a whole world of, I'm discovering more and more about that here in this conference about I guess you call it color perception and what the effects are of color settings and how they affect, how they, how do they present emphasis with colors. Now this one resource that I've been investigating called Color Brewer I think it's fairly well known and it's a worksheet where you can, similar to what I was doing in the, the, the similar I was doing when the Quantum page before where you can pick the number of, of breaks and on what basis and in what color scheme. And something just a little note about labels is the fact, it's another thing is that you know like if you double your point size you're effectively using four times as much pixel space to represent the, the label. And so basically I was studying how to, like other, the city labels in particular I, I was sort of one of my main guinea pigs for this because the, because the US city names to me anyway you know I'm fairly familiar with them. If the emphasis is wrong in them I'll see it pretty fast. And like you, it's probably pretty hard to see on here but this was a simple scheme where I collected the standard deviation of the populations of the US cities and, and used, actually I used the standard deviation of the square root of the population value turned out to be a pretty good foundation for differentiating the populations and the importance of the cities on the map. And another case I've had before where, where doing a one size fits all type classification scheme doesn't always work. This is the city of Austin, where I live, where I used a, the S1400 class in the Tiger dataset is basically residential and small use, moderate use rural roads and so forth. But they don't classify it very well within that, namely the, what I'd call major thoroughfares are not separately classified. You have to either pull them out through their names such as, you really, that's not reliable. You can't say all the avenues and boulevards and parkways are the important roads. So this, I simply tried to have a length threshold that said any road long, any S1400 road longer than I think three kilometers, I will classify as a major thoroughfare. What happens to work in much of the United States, particularly the south, anything that's hilly or anything that's kind of outside the central plains where the roads curve, you don't get minor roads that are especially long. But I just did a Portland map here and you see the dark. This contrast is pretty poor here, but I've got the individual roads and the light gray over on the left side where because they've curved, the, my scheme works out pretty well that longer roads are highlighting more successfully. Whereas over here in the older part of the town, there's too many dark lines. So many ordinary thoroughfares, I mean ordinary residential streets happen to be as long as the thoroughfares and my scheme is breaking down here. 
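The city-label experiment described here — keying label size to the standard deviation of the square root of population — is easy to prototype before touching a mapfile. A rough sketch; the 7–16 point size range, the scaling factor, and the population figures are all made up for illustration:

# Prototype of scaling label sizes from sqrt(population) z-scores,
# along the lines described in the talk. Numbers are illustrative only.
import math

cities = {"Portland": 609456, "Salem": 160614, "Eugene": 159190,
          "Bend": 81236, "Dime Box": 1200}

roots = {name: math.sqrt(pop) for name, pop in cities.items()}
mean = sum(roots.values()) / len(roots)
std = math.sqrt(sum((r - mean) ** 2 for r in roots.values()) / len(roots))

MIN_PT, MAX_PT = 7, 16  # assumed font size range
for name, r in sorted(roots.items(), key=lambda kv: -kv[1]):
    z = (r - mean) / std                                 # distance from the average city
    size = max(MIN_PT, min(MAX_PT, round(11 + 3 * z)))   # clamp to the allowed range
    print("%-10s population %7d -> label %d pt" % (name, cities[name], size))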
And so the point being is that it's, in many cases, if you're trying to do an analytical study about how to classify roads according to their length, it's going to vary from place to place and a single scheme is not going to work across the board. And I've been seeing opportunities for how to use analysis of the data not only within a layer about how to classify, but between layers, get some clues about how you might want to stack your layers. And then I would also say opportunities for using analysis to prescribe the scale breaks. And so what I'm kind of a point where I rather than try to doctor, say, a map server internally to take on some of these kind of capabilities, I think it's more appropriate to try to model outside. For example, I'm learning more and more about scribe here at this conference and it seems like it's a more suitable vehicle to package statistical calculations about the data as part of its process. And if I were to create a pie in the sky description of a purely outside app that would help prescribe styles from data analysis, I would, of course, make it integrate well with the individual products like map server and actually be able to have access to the data source like if you're in post GIS that you could actually, if you needed to for performance sake, set a classification value by adding a column to a table that didn't have it, that you'd have the means right there to do so. And click into the state of the art color perception worksheets such as the one I was showing earlier, the color brewer and integrate well with the symbol creation worksheet and line styles font worksheets and be able to display graphs of the queries that you're doing on your data. So, anyway, thank you very much. I just had a comment. Some of that's on your wish list that is actually packaged in that map manager, the Windows product for a map server. So it's pre-packaged there. Thank you.
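One item on that wish list — persisting a derived classification back into PostGIS so the map engine can classify on a single column — is already straightforward with a few lines of SQL driven from Python. A sketch with hypothetical table and column names (tiger_roads, mtfcc, geom), using the 3 km thoroughfare threshold from the talk purely as an example:

# Sketch: add a 'road_class' column to a PostGIS table and fill it from a
# length threshold, so the map engine can classify on one column.
# Table, column names and connection settings are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=gis user=gis")  # adjust connection settings
with conn, conn.cursor() as cur:
    # ADD COLUMN IF NOT EXISTS needs PostgreSQL 9.6 or newer.
    cur.execute("ALTER TABLE tiger_roads ADD COLUMN IF NOT EXISTS road_class text")
    # Call any S1400 segment longer than 3 km a 'thoroughfare', the rest 'local'.
    cur.execute("""
        UPDATE tiger_roads
           SET road_class = CASE
                 WHEN mtfcc = 'S1400' AND ST_Length(geom::geography) > 3000
                   THEN 'thoroughfare'
                 ELSE 'local'
               END
    """)
conn.close()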
|
Map style, label, and visibility rules, especially those aimed at differentiating "important" classes of features from "minor" ones, can be derived from statistical functions performed on feature attributes. If the source data classification scheme is not already strong in prioritizing features how we want to view them, then style patterns may emerge from calculations over an assortment of counts, sums, averages, and other measurements. We will begin with a quick examination of popular open source web and desktop mapping engines -- do their configuration capabilities include formal constructs for deriving rules from statistics? Or must the developer arrive at "this looks right" through trial and error? We'll extend the discussion to specific data distribution patterns that can be exploited for styling. We're accustomed to setting line styles, symbol and font sizes, colors, and visibility at different scales. The bell curve resulting from a query may point us to where we make the scale breaks, or toward how much color or size contrast to employ in order to make the best presentation from the particular data we are displaying. Perhaps we can arrange our queries, thereby grouping our features a certain way, to aim for an "ideal" curve that is already known to produce pleasing results.A simple set of query tools for streamlining style assists from statistics will be used to create a few examples from troublesome data.
|
10.5446/31589 (DOI)
|
And Sao Paulo, just to contest to allies, Sao Paulo is one of the states of Brazil, the biggest, not the biggest but the most, with the biggest population of Brazil. And our work is related to mainly in the metropolitan area of Sao Paulo, that you can see that the population there, it's very huge, about 20 million people living in this area, so very populated and very, with a lot of problems with the flooding, which is our main concern in this area. Just to contest to allies, in which situation we use the GIS processing, I'd like to talk about Sao Paulo, the problems that we have during the summer. This is a typical day in the afternoon in Sao Paulo without rain. And this is a bad day in Sao Paulo, in the afternoon after a severe event of rain. So the same situation without rain, with rain. It doesn't happen every day, but sometimes it happens and it causes a lot of problems because imagine a city, Sao Paulo has 12 million people. So the traffic, normally it's a big problem during the harsh hours, but with this situation it becomes even worse. So Sao Paulo, we have some tools to deal with this situation. We use weather radar to do now casting and for some actions to be taken in the events. So we have, nowadays we have SBAND radar with a very nice accuracy. So this is the most important tool that we have to deal with this problem. So this is a typical information of radar. There's on the left hand side we have the scale, the hotter the color, the more intense the rain, and the lighter the color, the lower intense the rain. We also use some ground stations. We use to measure the level of the rivers and the intensity of the rain using some equipment whether, installing the weather stations. For doing this in metropolitan area we have about 300 stations. We use GPRS to transmit the data or some, in some cases, a satellite. You can see some charts here. The first one is the accumulated rain in time and the second one is the level of the river. This is good to, it would be good to realize that we have some levels, warning levels cataloged previously like attention alert, emergency and flooding. So in each case that's the water which is this level. Some actions must be taken by civil defense and governmental institutions. It's just a river cross section sample in the river in a regular situation. This is river in a hard situation near the flooding level. Our main customers are the governmental agencies like water and energy users and the civil defense, some civil construction and the traffic engineering also, they use this kind of information. São Paulo is here, is divided in many sub areas and we do this kind of monitoring for each sub area. So we have the color, I scale off color which shows that when it's yellow, it's in the tension and red, it's in alert. So at this point I will pass for my colleague Ivan to continue and to show how to use the, how we use the GIS tools. Hello, morning. In this map you can see that we get this map from GeoServer and we have some static information and some dynamic information. The states are feed to the GeoServer to change the color depending of the status of the area. Our infrastructure is based on virtual machines, locally virtual machines on VMWare and remote virtual machines on Amazon. We use Linux with basic center OS with varnish for getting performance and cache because we have a lot of traffic mainly in that site you saw in previous slide. 
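The warning levels mentioned above — attention, alert, emergency, flooding — are essentially threshold checks on the latest river stage reading. A toy sketch of that idea; the threshold values and readings are invented, since each real station has its own calibrated levels:

# Illustrative only: map a river stage reading to a warning level.
# Threshold values are invented; every real station is calibrated separately.
THRESHOLDS = [          # (minimum stage in metres, level name)
    (3.5, "flooding"),
    (3.0, "emergency"),
    (2.5, "alert"),
    (2.0, "attention"),
]

def warning_level(stage_m):
    for minimum, name in THRESHOLDS:
        if stage_m >= minimum:
            return name
    return "normal"

for reading in (1.2, 2.1, 2.8, 3.6):
    print("stage %.1f m -> %s" % (reading, warning_level(reading)))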
On that site we have had days with more than 2 million hits in a single day, and since the information is created online it takes a lot of processing, so each result is now cached for one minute. That introduces a small delay in the updates, but it's just one minute, which is not too much, and it gives good performance on modest machines. We use Java with Apache Tomcat 7 and GeoServer, and our main database is DB2; we also have PostgreSQL and PostGIS, but for smaller things. Our current database of telemetric data has 260 million records, almost 47 gigabytes of data. The architecture puts Varnish in front of all of our servers, so every request passes through Varnish to get cached, which gives good performance. All the services are virtual and they are redundant — in this slide you only see one box for each service, but they are redundant. As for GIS services and open protocols, we use WMS and WFS for local data and external data; we also get some data from outside. On GeoServer and GeoTools we create images for publishing directly and for internal use, based on shapefiles and our own products. We also use image maps generated by GeoServer to show hints about the places where the user puts the mouse. Here we have an example of how we use GeoServer to simplify our work. We have a product that shows contours of the rain, and it is a single product generated by the radar: we have it as one layer on GeoServer and we apply filters to show it as many different layers to the user. We also use CSS and SLD for defining the display, and for that product I will show the image: we have a contour around the rain cells and a small point in the middle of each cell — that is the case I showed in the XML before. Here we have all the cells from the past; you can see on the right the check box "CTR passado" ("past"), and we index the slides as they change. Here is the currently selected one, here we have the forecast, and here we have all of them together. This option is very easy for the user to select and very easy for us to program, because we just need to create an XML file to configure our application. Using GeoServer we generate many formats; some go directly to the user in the browser, and some are processed again by our application. Mainly we provide PNG, GIF, and JPEG for web browsers, together with the image maps, and for use in Google Earth we generate KML and PNG directly. Some images we produce with transparency and some without, to make things easier depending on where they will be shown. And for GIS we use the two services, WMS and WFS. Here we have one of the changes in our system. The map on the left shows the old system, where we had an AutoCAD-based map of the region. Every time we had to make a cut or change something, we had to do everything manually — check that the bounds were correct, everything by hand. We had a lot of problems with this: errors, time-consuming updates. It was not good. Now we have maps that come directly from shapefiles and data that comes directly from the database, which we can put together. If we need to change the bounds or the cuts, we just re-execute the query; there is no problem updating, it's faster, and it's easy to decide what you want to overlay on each layer, so it's very efficient.
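Because the maps are published through standard WMS, any client library can request the same imagery the site shows. A sketch using OWSLib — the service URL, layer name and bounding box here are placeholders, since the real endpoints are internal:

# Fetch one radar image over greater Sao Paulo from a (hypothetical) WMS endpoint.
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")

img = wms.getmap(
    layers=["radar_rain"],              # placeholder layer name
    srs="EPSG:4326",
    bbox=(-47.5, -24.5, -45.5, -23.0),  # rough lon/lat box around the region
    size=(512, 384),
    format="image/png",
    transparent=True,
)
with open("radar.png", "wb") as f:
    f.write(img.read())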
Here we have another example that was very difficult to implement in our old system. Now we have arrows to show the wind direction directly over the rain. The colors are easy to update: if we decide we need to change a color, it's just one line of configuration and we can change everything. Just for information, in this new system we also have an option to click on these colors and it will blink the cells in the image that have the same color, so you can ask "where is it raining 10 millimeters?" and those cells will blink. We have the option to use external data like OpenStreetMap; sometimes the user wants to see things at street level. We don't have to create that in our system, we just connect to OpenStreetMap and use it. Now some more technical information — I don't know how technical you are. One thing we had to do that took a lot of work to understand, because the documentation was not very good, is that we implemented our own data store to plug into GeoTools. That store converts the data that comes from the radar, which is in a proprietary format from the company that made the radar — though they do provide the format specification — and the data store integrates it directly into GeoServer. Currently we implemented it using the abstract file data store, but Jody said that's not a good idea because it will be deprecated, so we have started to migrate to the content data store. It's very easy to migrate; I did it during the workshop yesterday morning, because all the difficult parts had been done before. For everyone who wants to do something like this, I recommend these two links in the documentation, which cover data stores, and also the tutorial that shows how to create a CSV data store with both the old method and the new method. It's very easy to do. The quick steps are: implement a factory, extend the data store, and put the name of your class in a file so it gets registered. Those are the quick steps to create your own data store. Having created the data store, you also have to create the data source, but that's in the tutorial, you can see it there. We have two basic implementations: one that reads our database and provides the definitions of the layers that we use, and a file data store that reads the radar file and exposes the vector data. As for getting the GeoTools code running in GeoServer: it's not easy to keep restarting GeoServer, so I recommend using a JMapFrame for testing. You code it with GeoTools, show it in a JMapFrame, and if everything works you will probably have no problem getting it to work in GeoServer; you just have to put the jars in the right directory. Data conversion: we tried using rasters and we also tested vectors, and we decided to use vectors, creating a square cell for each point from our radar data, which makes it easier to style and show to the users. That uses a lot of memory, but since our data is just squares we don't have many problems. One difficult thing is that some products that come from the radar have multiple types of product in one single file — different shapes, different data. So we created a virtual layer, where we read the same file and provide a different schema for that data. We use WGS84, which is EPSG:4326, and GeoTools does the conversion when necessary, so we don't have to worry about projection problems.
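The "square cell per radar point" conversion described here is conceptually just expanding each grid centre into a box. A simplified sketch with Shapely, assuming the coordinates are already in a metric projection; the real system also deals with the projection subtleties discussed next, and the grid origin and rainfall values below are invented:

# Turn radar grid centre points into square polygon cells, roughly as described.
# Assumes coordinates are already in a metric projection.
from shapely.geometry import Polygon

CELL = 1000.0  # cell size in metres
HALF = CELL / 2.0

def square_cell(x, y):
    return Polygon([
        (x - HALF, y - HALF), (x + HALF, y - HALF),
        (x + HALF, y + HALF), (x - HALF, y + HALF),
    ])

# Fake 3x3 grid of centres with a rainfall value per cell (mm/h).
cells = []
for i in range(3):
    for j in range(3):
        x, y = 330000 + i * CELL, 7390000 + j * CELL
        cells.append((square_cell(x, y), 5 * (i + j)))

for geom, rain in cells:
    print(geom.wkt[:60], "... rain:", rain)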
Looking ahead, I expect to implement the SOS interface, the Sensor Observation Service, and also to implement the data store with time support correctly. I have a simple implementation, but I don't think it's correct, so we have to rewrite it. We would like to offer direct WMS services to external users. Right now only our internal systems use the WMS — we are our own internal users — and it is not open to the public because it needs too many machines and too much computing, and we are checking the best way to provide it. We do want to provide this kind of service, because users would get a better interface and could use other programs, local desktop programs, instead of just web browsers. On the multidimensional side, I saw in a workshop that ImageMosaic can apparently also be used for vector data, and it seems a good way to integrate with our system; next time I get into the code I will try to implement that in our system. I think that's an overview of what we do, and maybe you have some questions for the time we have left. You ask about raster versus vector: for the data you basically have two options. We use raster in the sense of just points, and what I call vector is like the shapefiles — but vector is our internal format: all shapefiles are managed as vector, and as I convert my matrix internally, I use vector as the format. You ask how we go from this to the alerts. That step is done manually, because we have a lot of information and we had problems when we tried fully automatic alerts. Sometimes a sensor has a problem and the reading goes very high, too fast, and it's not real; sometimes it is real. So the people who work 24/7 in the emergency system check the information, look at some of our internal alerts, and set the status of the sites — it goes yellow, it goes red — based on the information the system provides. The images go to the public and are updated minute by minute, but the area status only goes public once a person confirms it. Many things provide information, and one person decides. You ask about the grid: the radar gives square points. We created a routine that does a simple conversion: it takes the central point of each radar cell and builds a one kilometer by one kilometer square grid. We tried different routines and grid models and decided on the one that gave better results. Let me show something that's not really in the presentation, just for reference: we have that square grid, and another one that, as you can see, is not really square, for the same area, so we can choose between the projections and decide which is better — we decided on the square one. And no, we don't use NEXRAD. Yes, we use a hydrological model: we are now using the SWMM model from the US EPA, and we assimilate data from the weather radar and the weather stations. The model runs in real time every ten minutes and produces a forecast of the flooded area for at most the next two hours. So we use the current images from the weather radar, but also the nowcasting generated from the radar, which can give us one hour in advance with good reliability, and that information is assimilated by our hydrological model. Okay, more questions? No? Time is over. Thank you.
|
This presentation is about the adoption of the OGC - Open Geospatial Consortium standards in Sao Paulo Flood Alert System which was based on matrix coordinates and static maps.The Flood Alert System has more than 300 telemetric stations composed by rain gauges, water level sensors placed on rivers and reservoirs, water quality sensors, weather stations and a S-band weather radar reaching 240 kilometers of scanning range. The system offers Real Time support for a large metropolitan area and its Emergency Centers, Civil Defense groups, Government, Service companies and general public.We have integrated Geotools (for data conversion), Geoserver (services like WMS, WFS), DB2, OpenStreetMap, uDig, Quantum GIS and some other softwares in our architecture. This set of tools provides many possibilities to easily integrate our data with other systems and external data, like some Hydraulic and Hydrological models that return geospacial data with flooding area forecast and vulnerable buildings.Talking about the architecture, the adoption process, some of the issues, apllied solutions and further development.
|
10.5446/31592 (DOI)
|
from Hustlandia and I'm going to talk to you about the open source tool for surveying applications, especially in the field of water management. So yeah, if you don't know Hustlandia, we are a GIS company, open source GIS company, you can go on our website, you'll know more. That's not the subject of today. Today I'm going to talk to you about first the context of the project we realized, then the exact needs, then all about the GIS part we have developed for this project. I talk about data as well, and then more specifically about surveying, so the hardware part and then the software part for surveying application we developed. I'll have a word about versioning too and then I'll extend to a simulation and the future of the project. So, but the main part will be about surveying hardware and software. So the context of this project is a place in Romania which is called Valcea at Cunti. That's about 300,000 people and actually it's well known because it's called the capital of the hackers. It's a place where all the people are trying to do phishing to your email icons live, so and they have big BMW cars and sunglasses and all, it's pretty wet, but they also do some like normal stuff and there is a European project for the water network development in Cunti and so they got money to improve all the water distribution and waste water. So that's more or less 29 pumping stations, 19 wastewater stations, 1,000 kilometers of distribution pipes and 350 kilometers of waste water area. So there was a strong need for GIS setups, they are quite nothing to organize this project. The project is very global one. It goes from planning to IT to buying land and to buying pipes and actually building the water network distribution. I'll be talking about GIS only, so we worked with the APAVIL company which is the company in charge of water distribution. So the need was a desktop GIS first, they needed the employee to be able to do GIS on a classic desktop machine. They needed a field data model as well for water distribution. They wanted to build a model for that. They needed a centralized data repository as well, surveying tools which is the main focus here. And they had some very specific needs given the constraint of the country, they had to do offline because there is no internet everywhere. So they had to be able to do surveying with offline management. And they wanted also to have version management of the data. So they wanted to be able to do scenarios and to say, okay, if we develop this part, what's going to happen from one version of the data being able to have different branches. I'll talk about that later. Simulation coupling was important as well, so being able to run simulation for water distribution and then on the further needs there was map and web rendering. So we provided them with expertise. We had to buy hardware as for the GIS desktop and for the surveying tools and of course they had to find some software and they were very keen to open source. So they wanted open source software for all of the application, be it for desktop GIS, surveying or map and web rendering as well. So let's present that a bit further. As for the desktop GIS, the choice was quite evident now. You all know QGIS. So that's a project we chose to deploy for desktop application. And we did a lot of project development, I mean QGIS project development in this global project. 
So that's determining what object we were going to use, what the symbology would be for the different objects, defining composites to be able to print maps and edit some specific forms as well for the user, for the desktop GIS user to be able to enter the data with good ergonomics. We provided some plugin training. You probably know that QGIS is very extensible. So there was a lot of plugin development to adapt QGIS interface to their specific needs. So we provided training as well so that they would be able to develop their own plugin for their very specific needs. So that's for the desktop GIS. Also the data repository, well, PostGIS was a good choice as well, probably the best one. We used version 9.3 and we have a data model designed to do so I'll talk about that just after. We provided training as well on PostGIS so that was everything new for them. And we also used specialized for embedded data, so for as an embedded database and especially for offline management. That's globally the GIS architecture we are using. So QGIS are the client, desktop clients for data management, database management with PG and MIM as well a bit for the database. Then as central data repository on a server with the special database of PostgreSQL and PostGIS and some additional files as well for Waster for example. So that runs on the PC, that runs on the server. And the surveying part here is the one we paid attention to for this project because of the specific constraints that offline was for example. So we had tablets, I'll talk about that. So that's a mobile client, that's a PC tablet and the data had to be embedded as well inside the surveying material and software. So we used the special light and Waster light to embed the data on the hardware. And between the central repository and the specialized embedded database we developed a specific synchronization tool for offline management and versioning. That's a global picture. So I'll talk about this and this part more specifically. First of all, the data model. So we needed a distribution, water distribution data model as well as a wastewater data model. There is no standard, no global standard for water distribution models. So we talked with SCG, which is a Swiss public organization in charge of water distribution and they have a project which is named QWAT which concentrates on a focus on water as well. So we tried to materialize efforts to get the same data model so that our development we could make on our own could also be interesting for them and opposite what they were developing could be used on our data. So the model is pretty complex because you got nodes, pumps, valves, leaks, pipes, station pressures on it, etc. So lots of different objects and relations. But it's a pretty mature data model for water distribution and we are glad to be able to use it and discuss with them on some modification we had to make. As for wastewater, that's still a work in progress. There is a Swiss standard as well so maybe we're going to use that but not for sure yet. So that's the model. As for surveying specifically, so first part was getting good hardware for surveying. So we needed a GPS antenna. We used the Leica Zenos GG03, I'm not a hardware expert so that's not me who chose that but apparently that's top of the top antenna, GPS antenna, pretty expensive too. And they didn't want to use the embedded device which was delivered with the antenna so they chose to buy some PC tablet, PC oriented tablet so that's a rough tablet. 
You can use it under any conditions. That's Panasonic, that's G1, FZG1. So very good hardware and you can see they are happy to use it. So that's a hardware setup. So Panasonic FZG1, it's got Windows 8 and we needed to stay with Windows because drivers for the antenna are only available for Windows so we couldn't switch to Linux. But happily we could use Windows 7 which was much better on the tablet. So it's high end tablet, 64GB architecture, 4GB RAM, 113GB SSD disk, it's got Wi-Fi, it's got Bluetooth, it's got 4G connection, a 10 inch HD screen, it's got touch and stylus on the drug and it costs quite a lot of money. So that's 2500 euros which is in dollars, I don't know, 3000 dollars or something. So high end hardware, the most important part is that it's the PC architecture. So it's not Android, it's not iOS, it's a Windows tablet so it works exactly like a PC. So the software, we could have used QGIS just like it is just raw on this tablet because this is a Windows one but since it's oriented towards touch and using a stylus, QGIS is really not adapted to that. I don't know if you've ever tried QGIS on touching QGIS menus with your fingers, it doesn't work. So we had to find something else which would be more adapted to surveying and more adapted to our specific constraint and our specific field which is water management. So that's someone working on the tablet and that's the constraint we had for the tablet software. So we wanted NMEA GPS support which is the antenna we have, we wanted domain specific object as well so not like a generic GIS but very specific object like pipes and valves and all. We wanted to be able to digitize directly on the tablet, we wanted to be able to take notes to use the camera to take pictures, we wanted a good guy and good ergonomy especially for touch and stylus and we wanted that to be open source. Comfortable, translatable into Romanian and we wanted like a QGIS compatibility. What does that mean? I mean we wanted all the work done on the desktop to be also usable on the surveying hardware and software. So let's present you Worm. So Worm is a tool made by Nathan Woodrow who works for DMS in Australia. That's an open source application, that's a specific application but it's QGIS powered so it's based on QGIS libraries and it's very different applications, completely separated project but you can use the same rendering and you can use the same project so every project you do in QGIS you can use it with Worm. That was a very good point in terms of materialization of effort, you don't have to read with the work you did on your desktop, you can directly use the work you did on your desktop on your surveying software. It's got raster, vector raster, so same project and same rendering as well, the maps are exactly the same. You have vector raster and you can do online or offline data management. So that's a pretty good project, it's got data in specialites or you can use whatever data you want and a very important point, it's got a project management guide so I'll show you some screen shots just after so that developing a new project for mobile application is really easy. So it looks like that, as you can see it's a very simple interface, it's very oriented towards surveying because you don't have any menu, you don't have any small elements, it's big buttons, very simple features but important features. 
So that's exactly the rendering like we could see it in Qtis, so we did a Qtis project and then used it with ROM and you can see the main features which are classic GIS features. You have camera use as well, capture for digitizing with a GPS, capture a point for example and you have access to attributes as well so you can select any elements and access to the information which are specific here to pipes, that's in Romanian. Can any of you read Romanian? No, I can't neither, but that's a pipe. So you can enter new data, you have specific forms for the data you want to enter in so it's very clear, it's very big, you don't have any problem to enter your data. You can completely configure the forms so you can have some predefined data so that you know exactly what you are typing. We have a virtual keyboard to help for typing, not for normal keyboard or whatever. You can select the objects here, you created a new object, you can have some different raster layers as a background and you can capture the data. So you can record the GPS and then it will add new elements as soon as you move or you can capture as well points if you want. So that's the application, final application, so you can see it's very simple but it's very efficient in terms of surveying tool. What is good with ROM is that it's not only an application, it's a meta application, you can build an application with it and you got a project management guide with it so that you can define all your preclication inside a guide. So I would say I got a new project, for example a three inspection project, that's a sample project from ROM and you can select a few different configuration options, for example the title, the description, you can select, so you take a QGIS project and then you can select the layers you want to display, you can display logo, all kind of configuration and then you can see the map as the final application will render it. You can as well completely configure the forms you're going to have available for your user so you can say I want a field which is of type list for example, I want specific values for this field, etc. So every kind of form is configurable and you have a preview and you generate actually the project which I showed you earlier, so it's very easy, we actually set up the first project for the client showing him how you do that and then we said okay you got the project management tool and they built their own project, they are on application within a few weeks without problems so it was very good. So one point was versioning as well, so we had strong project requirements, we wanted data history, we wanted offline work and we wanted scenario management so that's all together makes like data management, data versioning needs and we actually built a new project called QGIS versioning which is based on post-GIS and spatial light and which is kind of a subversion for the database so with the concept of commits, branches and with a conflict management tool. So the principle is you get a working copy, you get an offline copy of your data, you get it on your tablet, tablet hardware and you can work on it and then get it back to the central database and you can as well do working copy inside the database without having to put it on your tablet so you can work directly everything inside the database. Next the workflow to update the data, so you got the reference repository which is the main repository for data. 
First of all to create a working copy for your user here so you got a local database then you do modification offline and third step is to do a working copy update whenever there is conflict you have to deal with conflict and merge your modifications with the modification which have been done meanwhile by the user B and then once everything is alright you can commit the modification to the reference repository and the user B can do exactly the same in between. So we got versioning and we got offline and we got some conflict management tool. So every time you commit your data you're going to say commit message and then it say okay it's you successfully committed your modification to the database or it says you have some updates to do before. We got conflict management, typically we got here data which has been modified on both sides so inside QGIS you have an interface where you can choose which data you want to keep or you can do modification of data. First point was about simulation that's not exactly surveying that's an extension of the water project globally. We integrated epanet which is a hydraulic simulation where you can do simulation on the water networks to know what pressure will be on what pipes etc. And we integrated that into the geo processing interface at QGIS. So it's a plugin and you can do time plots for example of the water level of tank. You get map of results with specific pressure points with indication on the pressure level for different pipes and all. So we are going to do the same with SWMM which is a code for simulation for wastewater. So for the future of the project so that was what was developed. The future of the project is to have better domain specific objects so there is a lot of QGIS guy mainly to do to better integrate some water management tools. We want to improve the surveying software as well so GPS tracking has been done, capture data on canvas has been done already as well but the ROM application can be now extended with plugins so you can add some new features to the surveying application. And then there is some more global improvement for SCADA connection for example to control the valves, remote, geo web services, sensor oriented services and globally to have an integrated software stack for water management, water distribution. And try to build a community as well so every people interested in water distribution software open source based on QGIS, PODGIS are invited to contribute to use it and then to fund it if you can. So that's mostly it for this presentation. If you have any question do not hesitate I think we have some time for questions. Thanks. Questions? No question? Or that's yeah. Do you know if there are some applications for park and green space management? Parks and green space management in the same way? So I don't know if there is any actually live at least not publicly probably there are some people who developed some application for park management in QGIS but they didn't publish it or didn't try to get others to use it. But if you already got QGIS and QGIS project and all if you want to do some surveying this ROM tool is really easy to set up. I mean within a few days you can have your application running and if you don't need any real complicated synchronization it's pretty easy to set up. I was wondering if the software for doing the synchronization between QGIS and SQLite is available? Yeah it's available. It's on GitHub. If you go to Iceland it's GitHub. It's still bit rough so it works in this case. 
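To give a flavour of the idea behind that versioning and synchronization tool — without claiming to reproduce its actual schema, which lives in the project's repository — here is a toy illustration of the general pattern: each row records the revision in which it appeared and the revision in which it was superseded, so any past state can be rebuilt with a filter. Everything below (table names, columns, values) is illustrative only:

# Toy illustration of revision-stamped rows -- NOT the real qgis-versioning schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE revisions (rev INTEGER PRIMARY KEY, message TEXT);
    CREATE TABLE pipe (id INTEGER, diameter INTEGER,
                       rev_begin INTEGER, rev_end INTEGER);
""")
db.execute("INSERT INTO revisions VALUES (1, 'initial survey')")
db.execute("INSERT INTO pipe VALUES (1, 100, 1, NULL)")

# Revision 2 replaces the pipe with a larger one: close the old row, add a new one.
db.execute("INSERT INTO revisions VALUES (2, 'pipe upgraded')")
db.execute("UPDATE pipe SET rev_end = 2 WHERE id = 1 AND rev_end IS NULL")
db.execute("INSERT INTO pipe VALUES (1, 200, 2, NULL)")

def state_at(rev):
    # Rows visible at a revision: created at or before it, not yet superseded.
    return db.execute(
        "SELECT id, diameter FROM pipe "
        "WHERE rev_begin <= ? AND (rev_end IS NULL OR rev_end > ?)", (rev, rev)
    ).fetchall()

print("at revision 1:", state_at(1))   # [(1, 100)]
print("at revision 2:", state_at(2))   # [(1, 200)]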
We haven't deployed it in other projects than this one. But you can download it, use it and it should work. So it's open source as well. I have another question too about composers in QGIS. It seems like when I've used them that produces very large PDF files. Did you get into that problem? Well it depends also. You have two modes for exporting composers. You have vector mode or you have raster mode so you have to check that you are in vector mode for exporting and sometimes some layers cannot be exported as vector so the whole composer export is converted to raster. So that may be the case. And we are actually working as well on improving the SVGA export so that we can have some like real good SVGA export from QGIS if you want to go to Illustrator or something else and to have better editing for vector, big vector maps. Great, thank you. One more question, last one? No more question, perfect. Thank you.
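As a small footnote to the stack described in this talk: loading one of the water layers from PostGIS into QGIS from the Python console only takes a few lines. This is the QGIS 2 API of that era (QGIS 3 renames a couple of classes), and the connection settings, schema and table name (qwat.pipe) are placeholders rather than the project's real configuration:

# Minimal PyQGIS sketch (QGIS 2 Python console): load a PostGIS table as a layer.
# Connection parameters, schema and table name are placeholders.
from qgis.core import QgsDataSourceURI, QgsVectorLayer, QgsMapLayerRegistry

uri = QgsDataSourceURI()
uri.setConnection("localhost", "5432", "water", "gis_user", "secret")
uri.setDataSource("qwat", "pipe", "geometry")   # schema, table, geometry column

layer = QgsVectorLayer(uri.uri(), "Water pipes", "postgres")
if not layer.isValid():
    raise RuntimeError("Layer failed to load - check the connection settings")

QgsMapLayerRegistry.instance().addMapLayer(layer)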
|
It became possible lately to deploy a full OpenSource application stack for field surveying. This presentation describes a water distribution and waste-water management project from a technical point of view, with a strong integration of mobile tools within an industrialized GIS.This projects features a GIS part, with a centralized reference data storage leveraging PostgreSQL/PostGIS, and uses QGIS as a user interface. This combination allows to manage custom data with high volumes efficiently. The project also includes an important mobile side. Implemented on a rugged tablet, a custom tool has been setup to capture and enrich field data. The software is based on ROAM, a new OpenSource software designed for field survey. The tablet is connected on a 3G/4G network and takes advantage of a GNSS antenna to increase GPS precision. It also features an autonomous offline data management module, so as to be able to work in bad network access conditions. The tablet also embeds all required data for greater efficiency. One specificity of this project is the implementation of a synchronization tool between the data used in mobile situation and the reference data, in a multi-user environment.This synchronization tool, developed with PostGIS and SpatiaLite, let users manage data history, data modifications, data merges, offline mode, as well as branches, for parallel versions of the same data. The latter enables the design of evolution scenarios of the network. A classic issue of the surveying work in mobile situation is therefore solved, being able to work in a disconnected mode with multiple land surveying teams smoothly, while keeping data traceability.The project currently evolves towards water simulation integration, interconnection with SCADA industrial systems, and sensor data automated integration (through webservices).All These components therefore constitute a full software package, fully opensource. The various components can be used for other applications than water management. The new features developed thanks to this project can solve mobile GIS issues, and optimize the TCO of GIS solutions for industrial projects, for real-world critical applications.
|
10.5446/31594 (DOI)
|
Welcome everybody. I'm Jeroen Tegeler, founder of GeoNetwork open source and director of Geocut, a small company. We have a stand downstairs. If you want to come and talk to us, what do we do? You can find us at our upstairs, actually, at a place where food is served. I'll talk about GeoNetwork 3 and a bit around GeoNetwork 3. I'm going to rush through the slides, but they're mostly graphic. So, just to start, we all think that describing and finding data is so easy, or it should be so easy. And maybe that's because we have Google. And so, when you type in Google something, you usually find your first one, and maybe something you need. But if you start looking for geographic data, and I've done this just some random thing, type a search on geographic data, forests in Ivory Coast, and there's a lot of forests in Ivory Coast, you actually get 13.8 million results. So, you see, it's really easy to find a lot of data. You can do the same and find pencils. And it's even worse, you get 33 million type of pencil results. So, we're kind of having an information overload, and you need to see how we survive in there. So, one of the things you can do is have a shop and filter. For instance, in Amazon, on types of pencils, and you have all kind of filter options, and you get to the specific box of pencils you want to buy and use. Now, all magic, we have this for GeoNetwork. Ta-da! We're Superman, and we do all of this. We're joking. We made something, and it's kind of a shop, but then for GeoData. It does everything, and we are convinced that it does everything. And still, we have problems with people that can't find data. I think it's complicated. We think that this system will help you to set up whatever catalog you need and do everything you need. You can share data available for a project, for a department that you work in, for a municipality, maybe a county or province or a state, for your whole country, for a continent, for the world, even beyond that, some planets and so on. And it's actually been used for that, and still, there's a problem. And I think that is the danger in expectations we have that by just running these systems, and building a software like GeoNetwork, we can kind of fix all those data-searching issues for you. Now, there's one really good thing, and that's this group of people. They are all core GeoNetwork developers. They come together every year, and they do amazing work developing GeoNetwork to what it is now. And it's not enough. Luckily, we have you. You, the audience, the end users are the people that give us most of the feedback that we need to improve the system. So, over time, we've had many versions of GeoNetwork, and not all of them as successful as we wanted, and still quite successful. So, it depends a bit on your perspective, but we feel that we should improve further. And so, what has happened over the last one and a half year, maybe two years, is a complete re-implementation of the user interfaces. This uses the latest frameworks that people use to build user interfaces in the web, like Angular, JavaScript, D3, and Bootstrap, and the concept of what is called responsive design. So, once you develop an application, a user interface, which is basically the front end to the GeoNetwork backend system, you get an application that will work on the desktop, but also works on the mobile phone. And so, at some point, you can even start describing datasets you have while you're on the bus and going to work, because you really feel like you want to create metadata. 
So, what did we do first? Who's using my data? This is an important thing for the person actually describing the data. You can invest a lot of time in describing, and if you have no clue what your end users actually look for, it kind of becomes a system that you keep filling, but there's no feedback. The same actually goes for your organization. You want to know what data you actually have. Up to now, that's been quite a big question mark, and so we've tried with statistics to get more and more out. But what happened first with implementing a new user interface was a focus on the administration console. So, the whole system settings and everything you can get out of that system. And there are a couple of things in there that are important, like statistics for data, for searches, for the content of your catalog, the state of your system. Things like harvesting, having reports on what data was actually collected from another catalog. So, there's basically a lot of panels that have been implemented in what will be GeoNetwork 3 that provide you with statistics on data, on content, on system behavior. So, are my indexes up to date? How often is my CSW actually searched? What kind of, well, this is all very small, but it's just an illustration. You get the option to see what terms were searched, what kinds of services were used most. So, is my catalog service for the web accessed a lot? Or is it actually the open search? Or is it an Atom feed or something? So, you know where you can spend more time on improving. So, search statistics: you can define the time range you want to see those statistics for. There's a lot of content statistics. So, for the content in my catalog: what type of keywords have been used a lot? What type of services are available? And how often are they used? Another quite good improvement is on the harvesters. Harvesters are basically a connection from a GeoNetwork catalog to another catalog. Maybe a library, maybe another CSW catalog. And you can collect collections of metadata coming from other systems. But maybe also from a GeoServer service or from a local file system. And you can grab all this metadata into your system automatically with a scheduler. And here, this is the setup of one of those schedules. And you can have multiple harvest sessions going on in parallel. And you schedule how often that remote catalog should be connected to. You then get reports on what was added, what was deleted over time. And you can see that for every harvest a report is generated: how much new data came in, what was deleted and so forth. There's a lot of other screens I could show on the admin module. But I don't think it makes a lot of sense to show all of those. But I guarantee there's really a lot of very nice work on the administration interface. So, the second big thing that was worked on was another part that is not very visible for end users. But it's visible for the people that actually describe data. And that's the editor. This is kind of the piece in the GeoNetwork software that has most of the pain, at least in the experience of the people that have to contribute or maintain those metadata. Metadata editing is a complex thing and I'll get a bit more into detail on that later. But a lot of improvements have been made on the editor. And the other very nice thing of this new editor implementation is that it becomes really easy for software developers to customize the editor.
So they can really quite quickly change which fields are displayed and which ones are hidden. There's a lot of interactive feedback from the system when you create metadata records. And here's how you create a new record. First you have an overview of what metadata you actually have as a user, so you can quickly find your own records and start editing them or improving them. But then when you want to create a new one, you have the option here to select what type of data you want to describe. It could be a data set, it could be a feature catalog, maybe a map or a map service. And then the next step, so that's the left column, the next step is to actually select a template for that particular data you want to describe. This can be based on metadata profiles, for instance a North America profile or a Marine profile or an ANZLIC profile or an INSPIRE profile, etc. You choose that and then you select what group that metadata should be part of. And you create an empty record that possibly has a lot of pre-filled information already in there. The form then looks like a rather straightforward form. What you see here is a multilingual one, with English and French languages supported. We only see English at this point, but all the translatable parts could be changed and looked at in French. And here's a bit more of a close-up of a field like that. What you see is that you have all the languages underneath. And you select which language you want to type in and then you see that language. You can see this in this simple form, so just one field, but you also have the option to have a screen, which gets quite full, with a view on all the languages at the same time. So here you have, for instance, Arabic, I don't know, Chinese, French, each listed under the other. And so you can start doing translations throughout your record. So you have the option to look at the metadata quite simplistically or a bit more complex. And it's always still a form, so it can be a bit overwhelming from time to time. So then there's the option to add all kinds of resources to that metadata record. For instance, a zipped shapefile or a document, a PDF, whatever you think you want to add, or maybe links to services, web services, links to other metadata records. There are thumbnails, et cetera. There are a lot of resources you can link to that particular metadata record. So if you upload a shapefile or a GeoTIFF, GeoNetwork now has another option. You can then immediately deploy such a shapefile as a web service. So from the same metadata editor, this is the right column with the resources listed at the top. In this case, a zipped shapefile: you can select a shapefile and deploy it on a GeoServer instance that you configured in your admin console as a GeoServer you can use. So you deploy it. Then you have the option to style that layer using the styler provided by GeoServer and create the links in your metadata as well as on the server. There's a connection between those metadata records and the web service, as well as the other way around. You can then also generate thumbnails from the WMS layer. So it just connects with a small OpenLayers window. You set the scale you want to see this map at. You move the map around and you create the thumbnail. The other option is just to upload a particular image you have on your desktop or link to some URL you have somewhere. So that's all metadata editing. And it's basically all ready to go but still needs testing. GeoNetwork 3 is not out yet. It's still under development.
So then the third and last part that is under development now is, and you probably expect that, the search interface. Currently the search interface is implemented in ExtJS. It's complex to maintain. It's very, very particular in fine tuning and making sure everything looks good. So we quite quickly decided in the project to abandon that technology, and we've abandoned it for the editor. We'll now also abandon it for the front end and replace it again with a similar interactive search as you've seen in the editor and the admin console. So this is the very first screenshot. These are kind of a replica of the existing user interface, although we are discussing how to further improve that user interface; there are many, many different perspectives you can have on the metadata catalog. So this is kind of the text-oriented view on a catalog with filters and so on. There are not a lot of new functions in here but it's much more interactive. It's much more responsive. You change something, you type something, and you see the changes in the results and so on almost instantly. So it's nice. It scales nicely on a tablet. Suddenly, results appear under the search form. Everything works nicely in that scaled environment. So another quick screenshot with kind of a text view and adding keywords to filter down to particular areas. And then there's a map viewer component. And the map viewer currently looks like this in release 2.10. Again, it doesn't really work nicely. The experience has always been not very nice, especially with ExtJS. So replacing that with a map viewer now, we're actually focusing on a completely full-screen map viewer. It will really be covering the whole window. There's nothing more on the page than this map. But there's a search box on the left. So you can start searching and it will look into a gazetteer, but it will also look in the catalog. And then when you find layers, you can instantly add the layers to the map. And then you can also start inspecting those, so things like the legend. You can see that you typed in some information in the search box and you instantly get feedback on potential results you want to see. And then you also have the option to inspect the metadata of that particular layer. So there's not really a switch between the textual search view and the map view. They are more like two types of interfaces on the same catalog. Although, if you do the search in the text view and you click on add map or view map, it will add it to this viewer. So again, like in previous versions, there's a print map option. All not super fancy, but nice to have. So I wanted to finish this presentation with a couple of slides to discuss metadata editing anyway. Because I think we're not there yet. And I think we still need things to help. We have to make the describing of data more attractive. And I've been discussing this a couple of times in presentations; this is kind of an evolution of that thought. Currently, the common perspective of data description is what we have in the system. I've made screenshots of the full browser window. So this is not what you usually see, only the very top part, and then you scroll down. And so in GeoNetwork, when you have the simple view, this is not the editor, but just a view on the data, the whole page for a single record is that long. If you look at the INSPIRE view, it's even longer. It's a really long list of properties.
And if you would look in an XML view, it would even be three times as long. And it's completely impossible to understand what this is all about. So this is scary stuff. And I can see that people are overwhelmed with metadata records and don't want to deal with it. But still it's important, because then you know at least what you can use. But I feel that there's a need for kind of a wizard perspective on that, at least on the editing, the creating of the metadata. And I've kind of promoted, I'm not a shareholder, but the LinkedIn approach. They have the option where you fill in small parts of your profile and then they slowly invite you to add more content to your profile. And it kind of works on, I don't know, the human will to complete things. And so they just do things like this: recommended for you, add your publications, add the honors and awards you ever won, test scores, whatever. I think we can do the same with a metadata record. So I think we can provide proper suggestions to users. We start with a very, very simple form where you just have title and abstract and maybe your data upload. And we read some of the properties of your data instantly on the server. Then we can start asking about data quality or about licenses and so on. Those are things that, you know, if you do them on day two or three of your work, they're still fine. So slowly you could get to metadata records that kind of approach completeness. And, yeah, it kind of makes you feel good if things start to complete, or you want to actually have at least a 70% complete record. So it kind of pushes you to describe your data better. That's an idea I have. And if you don't start today, it won't be finished tomorrow. So at least start with small descriptions of your data and slowly add over time. But we definitely need your input. So if you have ideas on how to improve this, how to contribute to concepts that make the description easier, or ways that make it easier for people to make descriptions of their data, you're more than welcome. And we can make this the best geographic metadata catalog in the world and really be a hero. So thank you. Questions? Yes? Yeah? I think the wizard is a great idea, because inputting data can be a laborious task. But just a couple of questions. Is it possible for me to create my own custom metadata schema? Yes. Okay. Is it easy to do? Well, for me it's impossible, but there are software developers that do that quite quickly. So I guess, depending on your expertise, it's either easy or complicated. But yeah, there are a lot of examples. So there are a lot of different metadata profiles you can look at. Yeah, so profiles. We're actually working on migrating all these profiles into a separate GitHub repository where we can have experts on metadata profiles start maintaining their own profiles, versioning them, and then making sure that it is a complete package that can be loaded as a plugin to the system. So the plugin support is there, but we need experts to start contributing to the plugins. And yeah, from this new metadata repository, which is called metadata101, we could have responsible people become committers on such a thing and keep versions and so on of the profiles. Is it easy for somebody else to harvest the metadata on my GeoNetwork instance? Yeah, it's very easy. And there are a lot of different catalog interfaces to be used, so you can choose whichever one is easiest for that other person to interoperate with as well.
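For reference, that kind of outside access usually goes through the catalog's CSW endpoint. A minimal sketch of querying it with the OWSLib Python client follows; this is not something shown in the talk, the endpoint URL is a placeholder, and OWSLib is just one convenient client among the many catalog interfaces mentioned.

from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Hypothetical GeoNetwork CSW endpoint.
csw = CatalogueServiceWeb("https://example.org/geonetwork/srv/eng/csw")

# Full-text search over the AnyText queryable, the kind of field-based
# search that Lucene handles behind the scenes.
query = PropertyIsLike("csw:AnyText", "%forest%")
csw.getrecords2(constraints=[query], maxrecords=10)

for identifier, record in csw.records.items():
    print(identifier, record.title)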
So with respect to how to encourage people to contribute their metadata. Is it on the mic? Maybe I just need to get closer. Okay. One of the things, we've actually encountered this on a similar internal project, is to allow people to vote up or vote down certain data sets and map sets. And so the idea there is that you're appealing to a person's or an individual's or a group's ego, in the sense that they can attain a certain status within the collective community by attaining a certain number of points, which they can only do with certain degrees of completion of the respective data sets. And that seems to have worked pretty well, because certain individuals kind of take pride in their status within their community. So knowing that they can attain expert status in a particular area seems to work well. That's great. So we had this very minimalistic kind of voting thing in GeoNetwork. If you have something that is more sophisticated and works better, I really invite you to try to contribute that back to the system, because I think it's really good stuff to have in there. Do you have a projected timeline for the 3.0 release? Yeah, so for 3.0, the user interface, the search interface development is taking place for swisstopo in particular and a couple of other projects that want to contribute to that. And I think it should be finished by the end of this year, even in November. Then it will take some time to stabilize the software, and that depends a lot on how many people test and help fix issues. But my hope is that by the end of the year or maybe early next year it should be finished. So it still takes time. We have had a lot of discussion, even yesterday and last week, about whether we had to make kind of an intermediate release where the admin and the editor stuff was already available. And it's kind of complicated for us, because as companies we also provide support. And so supporting things that are kind of half baked and not ready becomes tricky. So yeah, from the commercial perspective we're kind of hoping that we make one version that has all the interfaces changed. But we have, I think today or yesterday, made a minor fix release of 2.10, and the administration console is actually already part of 2.10. So in 2.10.4, you'll find the admin stuff. Yes. Is it on? Yes. I was curious, how are you handling search? Are you searching all the metadata or just selected fields? Is that something that's configurable? Yeah, it very much depends. What are you using behind the scenes for the search indexing? Yeah, so there's Lucene used underneath, and it's either using specific fields, so there are a lot of specific fields you could search in, or there's the 'any text' field and it basically looks through the whole record. So it depends very much on the search criteria you set in the search interface, which fields are used, and the same for the CSW search. Yeah, you can configure what parameters are searchable in your CSW. So some CSW catalogs will have a lot of things that can be queried specifically and others just restrict this to more limited fields. Yeah. Any other questions? Are you also using Lucene for the geospatial search as well, or just? Yeah. Well, yes and no. For the bounding box, I think yes, and the moment you start doing more complex spatial queries, it's using GeoTools with a shapefile underneath or possibly PostGIS underneath. So it depends on the type of query. Yeah.
Again, also on that side, there are, I think, still improvements to be done for particular geospatial searches. I haven't touched on open data or linked data, which is a pity. I haven't touched on the different types of CSW services you can set up, but there's a lot of that as well that I could have presented. But if you have more questions, there are a couple of us here, or you can come to the GeoCat booth and discuss further. We're here to discuss the project. Okay. Thank you.
|
The presentation will provide an insight into the new functionality available in the latest release of the software. Publishing and managing spatial metadata using GeoNetwork opensource has become mainstream in many Spatial Data Infrastructures. GeoNetwork opensource 3.0 comes with a new, clean user interface based on AngularJS, Bootstrap and D3. Other topics presented are related to performance, scalability, usability, workflow, metadata profile plugins and catalogue services compliance. Examples of implementations of the software will also be given, highlighting several national European SDI portals as well as work for Environment Canada and the collaboration with the OpenGeoPortal project.
|
10.5446/31597 (DOI)
|
Hi, hello everyone. My name is Jae-won Kwon and I work at Gaia3D, based in South Korea. I really appreciate that I have a chance to deliver a presentation here. First, about this project: it was mainly managed by BJ Jang, one of the OSGeo Korean chapter members in South Korea, and it is currently used at the Korea Meteorological Administration. The title is the Big Size Meteorological Data Processing and Mobile Displaying System Using PostGIS and GeoServer. Let's go. Generally speaking, using PostGIS without tuning could lower the performance and quality of service. Tuning according to the situation or environment makes the service better. I will speak about the experience of our team successfully launching the weather chart service at KMA by using several tuning techniques based on the situation. First, background. KMA's weather chart service is processed like this. Oh, sorry. On top of that, collected observation data is modeled and converted into GRIB data, which is gridded data. After that, vector weather charts are generated through vectorization, and the weather charts for the service are served through the image application. If you look at the service architecture here, KMA data is first inserted into PostGIS, and the data in PostGIS is linked to GeoServer for the service, using HTML5, OpenLayers and jQuery Mobile. Weather data are mostly low resolution images, such as 5 km by 5 km. Also, there are somewhat different concepts, such as isobaric surfaces, various kinds of analysis models, and observation times and dates. Furthermore, different from general GIS data, the frequency of data generation is quite high, from a few times to several hundred times per day. Lastly, weather data should always be up to date. This system uses six spatial tables to handle weather data generated per time per day. From this data, more than 5,000 weather charts are generated, which is about 35 GB and 67 million rows. A gigantic amount of data is collected, generated, processed and deleted every day. The number of spatial data rows is more than the population of South Korea. It's a lot. I will go over some problems that we had to overcome. Due to the size of weather data, there could be three problems. Firstly, it takes too much time to collect data. Secondly, data is not properly managed because lots of data is accumulated every day. Lastly, it takes too much time to search data. In all, these are all-around problems regarding the database. The reason this problem happens is a lack of understanding of the characteristics and situation of weather data. At the beginning level of system development, people simply thought of putting data into PostGIS and serving it using GeoServer without understanding the characteristics of the data. However, weather data is quite unique, so customizing is required before development of the system. More specifically, it usually took 5 hours to insert all the data to generate weather charts, and the size of the data grew by 35 GB every day. Also, it took more than 10 seconds to search a single weather chart on mobile devices. Based on this situation, we set our own goals. We wanted to improve the system like this: inserting all data within 30 minutes using addBatch and executeBatch, keeping the size of the data file fixed using partitioning and TRUNCATE, and lastly, searching a weather chart within a few seconds by improving the indexes. And then, we have the improvement of importing speed for big data. There was a big difference in speed according to how the data is imported.
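The addBatch/executeBatch batching just mentioned is JDBC-style; a rough Python analogue of the same idea, using psycopg2's execute_values, might look like the sketch below. The connection string, table and column names are made up for illustration, not taken from the KMA system.

import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=kma user=postgres")  # placeholder DSN
cur = conn.cursor()

# Gather a few thousand rows first (hypothetical values), then send them
# in one round trip instead of one INSERT per row.
rows = [
    (1, "2014-09-12 00:00", "LINESTRING(126 37, 127 38)"),
    # ... roughly 3,000 rows per batch, as in the talk ...
]

execute_values(
    cur,
    "INSERT INTO weather_chart (chart_id, valid_time, geom) VALUES %s",
    rows,
    template="(%s, %s, ST_GeomFromText(%s, 4326))",
    page_size=3000,
)
conn.commit()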
It took more than 24 hours to import the 67 million rows of data one by one. However, it is shortened if the data is imported in batches. The graph on the right side shows the result of the test, and it says that the time is shortened by more than several hundred times when inserting after gathering 3,000 rows of data. One weather chart KML file has about 3,000 rows of data, and we did an import speed test using this data. When importing one by one, it took 109 seconds. But as you can see here, using addBatch after gathering the data saves a huge amount of time. It took 8.9 seconds to do executeBatch after 100 calls of addBatch, and it took only 1.1 seconds after 3,000 calls of addBatch. This definitely shows that how you import makes a huge difference. The second topic is about how to keep the data file size stable, which originally grew by 35 gigabytes per day. The problem of PostgreSQL, or its difference from other DBMSs, is that PostgreSQL is a write-once type. When updating or deleting, your data is not actually removed from the database. There are just marks on deletion without removing, so execution is fast and versioning is possible. However, this makes the size of the database extremely large, sometimes making the system go down due to a slowdown in performance. Also, the weather data increased by 35 gigabytes per day, and we had to solve this problem. I'll go over it in more detail. On Oracle, if we update data called B, B becomes B prime. The old B goes into a snapshot and disappears after the transaction completes. However, PostgreSQL keeps the record from before the update and adds a new record after the update. This makes the data increase with every transaction. As a PostgreSQL feature, PostgreSQL provides a function called VACUUM. Like a real vacuum, this function plays the role of cleaning up dead data, and it can be executed automatically. As you can see here in this slide, B and C are renewed to B prime and C prime, and E is deleted. On PostgreSQL, the old B, C and E data remain where they originally are, but if VACUUM is executed, the old B, C and E space is moved to the free space map, called the FSM. However, in case the table is too busy, the data file could even keep growing even though VACUUM is executed. Different from the general VACUUM, using the function called VACUUM FULL can decrease the file size. Like this image, it releases useless space and data and pushes all data back into the empty spaces. As a result of applying this function to three days of KMA data, it took 50 hours. Furthermore, during VACUUM FULL, the table is locked and you can't do anything. So, VACUUM FULL is not a good option. So, we tried partitioning. Partitioning is a function provided by the database and used when the data size is huge. To use partitioning, we divided the weather chart tables into seven, by day of the week, from Monday to Sunday. That is, we defined tables by day to insert into and also defined tables by day to delete. For example, on Sunday, data is inserted into table zero and the data in table four is deleted. Across the seven tables, data is kept in three tables, and the same work is repeated every day. At this time, in order to delete data, it takes less than one second to use TRUNCATE instead of VACUUM. It blows the data away in a blink. As a result, we could easily and quickly keep all the data within the M minus 1 day window. Lastly, improvement of query speed by resetting indexes. The improvement flow for query speed is: first, data condition analysis; then slow query finding; then query plan analysis; then index improvement. Sorry. It's important to figure out the number of rows of the tables for data condition analysis.
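Stepping back to the partitioning scheme for a moment, a minimal sketch of the day-of-week rotation could look like this; the seven child table names and the roughly three-day retention window are hypothetical, not the exact production setup.

import datetime
import psycopg2

conn = psycopg2.connect("dbname=kma user=postgres")  # placeholder DSN
cur = conn.cursor()

today = datetime.date.today().weekday()   # 0 = Monday .. 6 = Sunday
expired = (today + 4) % 7                 # table leaving the retention window

table_in = f"weather_chart_{today}"
table_out = f"weather_chart_{expired}"

# TRUNCATE is effectively instant and returns the space, unlike DELETE + VACUUM.
cur.execute(f"TRUNCATE TABLE {table_out}")

# Today's charts go into today's table.
cur.execute(
    f"INSERT INTO {table_in} (valid_time, geom) "
    "VALUES (%s, ST_GeomFromText(%s, 4326))",
    ("2014-09-12 00:00", "LINESTRING(126 37, 127 38)"),
)
conn.commit()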
To check the number of rows, it takes too much time to generally use SELECT COUNT(*). However, using the statistics table pg_class, it can be recognized quickly. If you execute the query shown at the top right, the result comes out like the one below it within a few seconds. Next, I move to another subject which is a little bit away from the database. We use SQL views in conjunction with PostGIS and GeoServer. For easily managing a layer, not just using a table as a layer, GeoServer can manage a specific SQL query as a layer when the data source is a spatial database, like Oracle Spatial, PostGIS, and ArcSDE. This system consists of six tables, so the queries are forced to be complex, but they can be managed easily by creating a SQL view. In addition, in the process of making a SQL view, it can do other work like reprojection. So by doing the operation in the DB machine, it is possible to obtain results more quickly than in GeoServer or the client. A query such as the one on the right side can be managed as a layer, having the same properties as the result shown at the bottom right. On the next page, how do we use the query on the right side? Actually, the actually executed SQL statement should be found in order to improve query speed. To do this, we found the actually executing SQL statement internally using the statistics table pg_stat_activity and figured out how much time it took. At the top right is the result of executing the query at the bottom left. The currently executing query is shown, along with when it started, how much time it took, and which queries are slow. The query at the bottom right is a little bit complicated; it is a sample of the internally executed query of the one registered as a SQL view. You can see that the SQL view registered on the previous page appears as a long inner query used as a table. Oracle's analyze function is part of its professional features, and the basic edition doesn't have those professional functions. However, PostgreSQL provides the EXPLAIN ANALYZE function on the command line in its basic installation, and pgAdmin III, which is the UI tool for PostgreSQL, also has that function. After clicking SQL, if you click the Explain Analyze button on a query, you can graphically see what is going on in the query, and you can also see the same result when executing the EXPLAIN ANALYZE command. Using this result, the query can be analyzed. We should consider how to set the indexes based on the previous analysis results when improving indexes. First of all, an index with all the columns used in the WHERE clause is set. Here, the spatial column has its own index type, so it should be set separately. Next, the column that filters out a large amount of data should come first; like this, it is more effective to filter rows out early. Also, conditions using the equality operator should come first, which is more efficient than filtering with conditions such as smaller than or larger than. Lastly, unnecessary indexes should be removed because of their bad effect on insert performance. As a result, data capacity decreased by 20% due to the index re-creation, and query speed increased by 6 to 25 times. It seems that the bigger the table size, the better the improvement. Lastly, the improvement result. This demo shows the 300 isobaric surface with temperature isolines, then the ground weather chart with temperature, and last the 800 isobaric surface with mixing ratio and temperature. Isobaric means a section of equal atmospheric pressure in the air. Let's go to the real site in Korea. This is KMA's mobile weather chart website. It consists of three themes. First is the data, the meteorological data type. We selected GDAPS. So we can select the isobaric surface. I selected the 925 isobaric surface. Then other data types appear under the category. We select temperature and humidity.
So this weather chart is served in a few seconds. Then we can see that this weather chart is shown on the site. As I previously said, we got results by tuning according to the three scenarios. We found out that appropriate conditions for the system are very important for inserting data, and we improved performance 100 times. Secondly, we keep the data file size fixed at M minus 1 days of data by mixing partitioning and TRUNCATE. We also have about a 20 times improvement in query time. In conclusion, PostGIS is a really great DBMS and the performance is never lower than other DBMSs. Plus, it works perfectly with GeoServer, so availability is high. However, better performance will only be granted after tuning with a good understanding of the features. Thank you for listening to my presentation. If you have a question, my manager is there, so please ask him. He knows everything in this presentation. Thank you.
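As a footnote to the tuning part of this talk, the two introspection tricks mentioned above, the fast row-count estimate from pg_class and the look at running statements in pg_stat_activity, could be scripted roughly like this; the table name and connection string are placeholders, not from the KMA system.

import psycopg2

conn = psycopg2.connect("dbname=kma user=postgres")  # placeholder DSN
cur = conn.cursor()

# Planner's row estimate from pg_class: milliseconds even on huge tables,
# as opposed to a full SELECT count(*).
cur.execute(
    "SELECT reltuples::bigint FROM pg_class WHERE relname = %s",
    ("weather_chart_0",),
)
print("estimated rows:", cur.fetchone()[0])

# What is running right now, and for how long.
cur.execute(
    """
    SELECT now() - query_start AS elapsed, query
    FROM pg_stat_activity
    WHERE state = 'active'
    ORDER BY elapsed DESC
    """
)
for elapsed, query in cur.fetchall():
    print(elapsed, query[:80])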
|
Gaia3D has developed a meteorological data mobile web service using PostgreSQL and GeoServer for weather forecasters in the Korea Meteorological Administration (KMA). This system displays weather charts, weather prediction information, weather images, and observation data on the mobile web for rapid decisions by weather forecasters when they are out of office or in a remote environment. I will deliver a presentation about the experience of developing and launching a mobile web service showing weather charts by tuning daily-updated big spatial data in terms of the database. Weather charts generated by this system are displayed using OpenLayers Mobile after inserting big vector data into PostGIS and rendering these data with GeoServer. This system processes 67 million lines of spatial data (approximately 35GB) and generates more than five thousand weather charts every day. On the previous system, it took five hours to insert data into PostGIS and tens of seconds to publish a single weather chart with GeoServer. Also, there was another problem: the file size on PostGIS increased without limit. Gaia3D decided to fix the problems and improve this system in terms of data input, data management, and data display. Consequently, the performance of data input has increased about a hundred times and the performance of data display has increased about two hundred times. Finally, KMA could successfully and stably manage the system without an increase in the file size of three days of data. (This system shows data up to the previous 72 hours.)
|
10.5446/31598 (DOI)
|
Good morning, everybody. Thanks for coming out. I know there's a lot of different sessions going on at the same time. There's lots of places to be and I really appreciate your attendance here. And I'm looking forward to telling you about some new Python software I've been writing. There's going to be some code in the slides and I haven't exactly mastered my presentation system. So the code samples might be a little small. You may want to jump up a row if there's a seat in front of you. I think it's going to be visible in the back. But just barely. Right? Maybe, you know, in the keynote I was kind of squinting at some of the slides there and I've done the same thing. So apologies. So my name is Sean Gillies. I've been coming to these FOSS4G conferences since back when it was the MapServer users meeting. And I'm working for Mapbox now. I'm going to talk a little bit about the software I've been writing there, some of the motivations for writing new software at Mapbox. And it's kind of cool to follow up on Mike Bostock's talk because I think about some of the same things when I design software, if nowhere near as deeply as Mike does. So I think I'll kind of reinforce some of the takeaways of his talk and talk about some of the same issues. All right. So, is that nice and centered? Right. The new Python packages that I'm talking about here are Fiona and Rasterio, as we call it at Mapbox. And these are new Python interfaces to the OGR and GDAL libraries. I think the distinctive thing about this software is that they are native GeoJSON speakers, right? So that's kind of their native interface for GIS feature data. They each embrace the good parts of Python. I mean, who here has seen or read JavaScript: The Good Parts, right? So Python has good parts too. And they're not the same good parts as JavaScript. And they're not the same good parts as C. And they're not the same good parts as other languages. So Python has unique strengths. And this software is written to take advantage of the unique strengths of Python. And each piece of this software embraces the command line. I'm going to show some of the command line tools that I'm developing with these and how we're using them at Mapbox in production. So I look at this as being a double challenge for myself in software development. I want to help experienced Python programmers get comfortable and familiar with GIS concepts, and be able to take advantage of data and take advantage of concepts like projections and whatnot to do work. So get them up to speed. The people I work with at Mapbox, we're hiring people that are kind of new to GIS, right? They're programmers first, they're GIS people second. And then I also want to help GIS experts learn to be better Python programmers. How many people here are not yet Python programmers? All right. But you know, right now I see lists going around of essential things that GIS professionals need to know these days. And if Python isn't at the top of the list, it's, you know, top three, something like that. So I would like the software I'm writing, I want it to be software that teaches people good Python practices and increases the general quality of Python code that's out there. That's one of my other goals. A lot of this work is coming out of a project at Mapbox to make a cloudless mosaic of several different data sets. That's kind of the work context for the software I'm developing.
And since this problem has been tackled before, and I've actually done similar things at other jobs, we're taking some different approaches. We want to do it cheaply. We want to do it fast. We want to do it without tons of staff. And off-the-shelf solutions for this are scarce. And trial and error isn't our only method, but it's still a very important method. So we need new software to do this kind of work. And the requirements for the new software are that we need to be able to iterate rapidly, right? We need to be able to try things, fail fast, move on. We want to use scientific software. We want robust algorithms, dependable algorithms, reproducible results. We have to be able to scale out to many, many nodes in processing. And with those requirements, I'm going to talk about how we kind of arrived at some of our software choices. I mean, it's probably fairly obvious to a lot of us that the first and third of those argue for open source. If we go back: for the scalability, using open source software means not having to ask for permission or pay an invoice to roll out new software to servers, other than, say, paying for the servers we're requiring. The first of those requirements argues for a high level language with multi-dimensional array syntax. And the third is arguing for scientific software like LAPACK and then proven raster format drivers in GDAL. So the fit for us is Linux and Python, NumPy and the scientific Python stack. Now there are existing GDAL Python bindings. And I think we're probably all pretty familiar with these. I'm going to try to explain why we're getting away from using these and why we're writing new software. You know, they have served us well. They just don't fit very well, in my opinion, with the good parts of Python. So I think we can do a little bit better. So what I mean by the good parts of Python: this is my first ever Venn diagram that I've actually published on the web. Okay? So bear with me. This is a diagram of the unique good features of three common languages. So this is the good parts of each of these languages. And it's not exhaustive. I'm leaving out good parts. We can debate whether I got these exactly right and I hope we will at some point. So I think it's going to be illustrative though. Does anybody recognize what the purple language is? Yes. And the yellow language that I'm alluding to, that would be C. And then Python's the blue language. All right. So GDAL's existing bindings are generated by a program called SWIG, the Simplified Wrapper and Interface Generator. And this is a tool for getting multi-language bindings from C++ code. And what you get out of the end of it is C++-like interfaces for your languages. So you tend to get an API that emphasizes, if not this, right, speed outside of Python, pretty much. But you tend to get the kind of overlap between the language you're wrapping and C++. So in this case, you get lots of numbers. You get common syntax like for and while, and you get operators, you get plus and minus and this and that. But SWIG doesn't really give you access to the good parts of Python, which are up in that left corner. So as an example here, this is from the rather excellent GDAL and OGR cookbook. So this is kind of a canonical example of creating a geometric object with OGR. It looks a lot like C, right? There are methods to add things.
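The slide itself isn't reproduced in this transcript, but the canonical GDAL/OGR cookbook pattern being described looks roughly like this; the coordinates are arbitrary, not from the talk.

from osgeo import ogr

# Build a ring point by point, then add it to a polygon.
ring = ogr.Geometry(ogr.wkbLinearRing)
ring.AddPoint(0.0, 0.0)
ring.AddPoint(0.0, 1.0)
ring.AddPoint(1.0, 1.0)
ring.AddPoint(1.0, 0.0)
ring.AddPoint(0.0, 0.0)  # close the ring

poly = ogr.Geometry(ogr.wkbPolygon)
poly.AddGeometry(ring)
print(poly.ExportToWkt())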
It's very imperative: kind of, you know, add a ring, or like, create a ring, add a point, add a point, add a point, add a point, add a point. Okay. So the classic osgeo.ogr geometry object. And I think this can be tightened up a little. I think that a different API lets programmers that already know Python well get into this a little bit faster than this type of API. So I have an analogy here to Python list construction. At the top I'm showing, and I apologize if you're not seeing this well, the most imperative, C-like way of making a Python list of the first 100 numbers. Okay. So you create an empty list and then you iterate over the numbers and for each of those, you append that item onto the list. Right. So that's probably the most C-like Python you could write. And then after that is the idiomatic Python way of doing the same operation. So this is what I'm looking for in new Python spatial software. Less of the top, more of the bottom. Less code, more Python idioms. And it's not just about less code, it's about speed too. The core Python developers have made idiomatic Python run faster. Right. It runs faster than the imperative code. And it's actually in this case ridiculously faster. I've used timeit to time the first one, and it's showing like a thousand times faster. Right. So yeah, the idiomatic Python code is much, much, much, much faster. So I think we should provide APIs like this in Python packages. You know, experienced programmers are coming in wanting to find stuff like the bottom. And it should be there in our spatial libraries. And then I'm taking it a little bit further too. I say, well, do we always really need to create fully featured geometry objects with hundreds of geometric operations on them just to ship data around in our applications? How about using Python literal dict syntax to carry geometries in our applications? So this is my take on that OGR code. It's the same geometry, at least in its data. And I benchmarked that as well with timeit. And this is, for example, 30 times faster than the OGR code. Does anybody recognize that kind of geometry representation? Right. That's GeoJSON. And then ideally, I think we'd like to, or I would, especially in my applications, like to pay only for what I eat. So if I'm just shipping data around, I can ship it around as a dict, if I don't need operations on the thing. If I do need operations, then there are other libraries, Shapely, for example, where I can wrap that data up and get a full-fledged geometry object where you can do operations like the buffer in this case. So I think that I've got a bit of a recipe here for winning the double challenge that I introduced earlier, which is that the Python programmers get GIS data access using familiar APIs. So not GIS APIs, but built-in Python stuff, like, say, dicts, iterators, generators, the good parts of Python. And then future Python programmers learn to do things in the fast and idiomatic Python way. So having talked about the motivation and some design concepts here, I want to talk about the actual software design of Fiona and Rasterio. There's a Python package that you import and use, that's the one at the top, the thing that most people are going to interact with.
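Going back to the dict-based geometry for a moment, in the spirit of those slides (not the exact code shown), the GeoJSON-like dict plus the pay-only-for-what-you-eat Shapely wrapping might look like:

from shapely.geometry import shape

# Plain Python data carries the geometry around...
geom = {
    "type": "Polygon",
    "coordinates": [[(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0), (0.0, 0.0)]],
}

# ...and Shapely is only brought in when an operation is actually needed.
polygon = shape(geom)
print(polygon.buffer(0.5).area)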
There's an extension module in the middle that's using Cython. Are there any Cython users in here? So yeah, Cython is pretty awesome. It translates Python-like code into C, very fast C, and you get things for free: you get, say, Python 2 and 3 compatibility practically for free when you write your C extensions. Quite nice. So we use this for fast loops, for memory views on NumPy arrays. We can release Python's global interpreter lock in several places to get faster code here. And then underneath it all is the GDAL shared library. Now, here's an example of the API for reading raster data with Rasterio. There's an open function which gives you a file-like data set object. So this has a lot of the same role as, I would say, GDAL's Open function. I struggle with that a lot. I struggle with things like having to go to the documentation to find out whether I need to pass a one or a zero for the update argument to Open, things like that. This function here runs just like Python's open function. It gives you an object that has some of the same methods back. So you open a data set here and you can see the name of it, like you could if it was a Python file. You can find whether it's open or not, like a Python file, or rather whether it's closed, and the encoding for feature data sets. So I've tried to make a very strong analogy to Python files. And with the data set object, the read method then gives you all the bands of the data back in NumPy arrays. And in this case, if it's multi-band data, then it gives you a 3D array, right? And it's like bands, row and column, or whatever you want. The reason why I'm mimicking the standard Python software here is to get some of the same properties in this software that Mike Bostock was talking about in D3, where D3 embraces the DOM, it embraces the way the browser works, right? Because from that, you get free documentation, you get free support, right? Because you're leveraging all this other documentation and all this other knowledge about how the DOM works out there on the web. I want to do the same thing in Fiona and Rasterio. I want you to see this open function and say, oh, that's like Python's open, right? Reading vector data works more or less the same way. So, more or less the same thing. You pass a path to an open function and you get a collection back. This collection is an iterator. So, like a Python file is an iterator over lines, this is an iterator over features that are GeoJSON-like dictionaries. And then on the writing side, I've tried to make it as, I'm not going to say intuitive, but as transparent as possible. You open it in 'w' mode, like you would a normal file. There are keyword arguments: you know, if you create a data set for writing, this is creating a new file on disk and you've got to set up the layout, you've got to set the data type, the number of bands, et cetera. These are all in the function as keyword arguments, and you can get these keyword arguments from another file that you've opened, if you want to make a copy of something. And then you write an array to that. Does anybody recognize, or not recognize, the with statement there? So, this is Python's context manager.
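A minimal sketch of the file-like reading APIs just described, with placeholder file paths, roughly as the Rasterio and Fiona releases discussed here behave:

import fiona
import rasterio

with rasterio.open("example.tif") as src:      # hypothetical raster file
    print(src.name, src.count, src.crs)
    data = src.read()                          # all bands as one 3D NumPy array

with fiona.open("example.shp") as collection:  # hypothetical vector file
    for feature in collection:                 # GeoJSON-like dicts
        print(feature["id"], feature["geometry"]["type"])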
When that block ends, there's an exit function called on the object. Within that block, you have a little runtime environment, and then at the end of it the exit method is called on the destination object and it cleans up after itself: it closes itself, flushes to disk, and whatnot. I think a weakness of the GDAL raster library is that there hasn't been a deterministic way to write your data to the disk when you want it to be written to disk, right? There are these kluges where you either delete the object or something like that. Myself, I like to, you know, when I write something, I would like it to be written and I'd like it to be, you know, deterministic. I'd like it to be written when I write it. So we have that. There's actually a close method on the file. You can do that as well. All right. Vector data works the same way. You write GeoJSON-like features to it. For georeferencing, there's a great project called pyproj and I'm following its lead. So pyproj embraces the PROJ.4 library, but instead of strings, it kind of introduced to me the concept that, well, these PROJ.4 strings were just keys and values concatenated into a string, right? But there's a Python data structure for keys and values. So we're going to do this. We're going to treat everything as a dictionary. All right. So these libraries don't have a spatial reference object or class. We're using dictionaries to pass this information around. I'm going to talk about two submodules of the Rasterio library, and the first one is the features module. This has a function that gets the shapes out of your rasters. So this is all the features in your array as GeoJSON-like objects, as an iterator over them. And then there's a reciprocal operation that can burn those same features into another array. All right. So it all works off of Python objects: dicts and iterators, tuples, ndarrays. There are actually no data sets or layers necessary. So you can pull a NumPy array out of thin air or create it in your computation and you can run it through the shapes function and it'll pull features out of it. You don't actually need to have a GeoTIFF or DRG or whatever, something like that. As an example, I took a Python logo, pulled the shapes out of it and then burned them back into another image. So this is just round-tripping it through, not keeping the colors in this case. I'm only doing one channel. There's also a warp module that gives you some of the same features but with all of GDAL's cartographic projection machinery in it. So again, there's no data set needed for these, no TIFF, no GeoJPEG, whatever. Any kind of NumPy array, if you give it a transform and a coordinate reference system, you can warp it into other reference systems. And then, as I said, the software embraces the command line. So I do a lot of bash scripting at work and I work with people who are bash wizards. It's crazy, the kind of stuff they do. And I don't know if you can see this, but some of this stuff is super clever, but there has to be a better way, right? There has to be something better than dumping the output of gdalinfo into grep and scraping it for the number of bands in your file or for the data type of your file or something like that. So who's done stuff like this? Raise your hand. Yes. All right. Okay. There's got to be a better way.
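Before the command line tools, putting the writing side and the features module together, a rough sketch could look like the following; the file names are placeholders and the mask threshold is arbitrary, so treat it as an illustration of the pattern rather than the talk's own example.

import rasterio
from rasterio.features import shapes

with rasterio.open("example.tif") as src:      # hypothetical input
    profile = src.meta.copy()
    band = src.read(1)

# A toy single-band mask; written and flushed when the with block ends.
mask = (band > 0).astype("uint8")
profile.update(dtype="uint8", count=1)
with rasterio.open("mask.tif", "w", **profile) as dst:
    dst.write(mask, 1)

# (geometry, value) pairs straight from the array; no data set or layer needed.
for geom, value in shapes(mask):
    print(value, geom["type"])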
I mean, it's not like Frank and Even are going to change the output of gdalinfo, right? But still, scraping is kind of, I don't know, it's kind of sad. I mean, there's a place for scraping. I don't want to be scraping stuff in production. That's the thing. So I'm writing some new command line programs using these to get us into a little better place in our scripting. So this is Rasterio's command line interface, which is a program called rio. And I apologize because this name kind of collides with, you know, a few other things out there. Here's a listing of the commands. So that's the help. I'm using a Python package called click to do the command line interface. And click is really nice. It gives you really nice help for free with just, you know, a few decorators on your function. So rio info: info is a subcommand for this. And instead of giving you text output about the file, it gives you structured data. So it gives you JSON out, metadata about the file in JSON. And there are a couple of options for indentation and this and that. And you can pick single items out of there with another option. So with the count option, you can pull the count of bands out. You can pull the CRS out, the bounds out, yes, sir. And so this stuff is getting into production work really quickly. People are loving it. There's the shapes command in rio, a command line program to dump, well, it runs that shapes function in the features module and dumps all the features in your data set out as GeoJSON. And then I'm using this really nice program, the underscore-cli library. Are there any underscore users here, in JavaScript? So yeah, I'm using that to extract a particular feature from the GeoJSON. And then I'm shooting it into the geojson.io command line to get it there. Okay, I'm going to finish on time, I believe. So recently, I think I blogged about this, we got new releases of the software. So Fiona 1.2 is new, Rasterio 0.13. And then over the past couple of weeks, I've been working with people on getting Shapely 1.4 out and released as well. So try this stuff out. Give me some feedback on how it's working. I'd like to thank the Mapbox satellite team, Chris and Charlie, Amit, Bruno, and Camilla. I want to thank my collaborators on Fiona and Rasterio and Shapely: Asger Petersen, Mike Toews, Brendan Ward, Kelsey Jordahl, René Buffat, Jacob Wasserman, Oliver Tonnhofer, Josh Charnut, Phil Elson, Matt Perry. If you guys are here, raise your hands. Matt's there, right? Excellent. And of course, Frank Warmerdam and Even Rouault. That's the end. I think I have about a minute for questions. Questions? Anybody? Yeah. Just a minor question about indentation. Yeah, the default is like the Python JSON module's default, which is none, right? So just, you know, all on one line with spaces in between. That's the default indentation. All right. Look for me later. Oh, sorry? Is it thread safe? I mean, can I run it across multiple cores? It's as thread safe as GDAL. Yeah. I don't know. I've been thinking a little bit about threads. I know that the Python community is a little torn about threads. You see from core developers a lot of disdain for threads, saying, well, don't do threads; if you're using threads, you're doing it the wrong way. I don't use threads very much. I think in production we tend to use multiprocessing for things.
So to get cheap parallelization, we use GNU parallel a lot. These command line scripts are usually run under GNU parallel to run them in that way. Threads: it's no less thread safe than GDAL. Yeah. This is another basic question. How do I read raster data into NumPy arrays? Oh, if you use Rasterio's open function on a file, so you pass it a path to raster data, you get your data set object, and then that object has a read method that gets you a multi-dimensional array. By default it gets you everything. But you can actually prune that: you can ask for specific bands or you can ask for windows of bands as well, because, you know, dumping gigabytes of rasters on people is not always the best thing to do. Yeah. What's the easiest way to read a raster attribute table in Python? Oh, I think that would probably still be GDAL's Python bindings. So, you know, I develop the features as we need them, and raster attribute tables really haven't come up yet. But I love feature requests and, yeah, I'm happy to work with people on implementing this stuff. It's actually not as hard to do this kind of programming as you would think, too. So, yeah. If you'd like to see that feature, I can help you get involved. Okay. I'm going to let the next speaker go, and I'll be around for technical support and install questions and things like that. Cool. Thank you.
|
Fiona and Rasterio are new GDAL-based Python libraries that embody lessons learned over a decade of using GDAL and Python to solve geospatial problems. Among these lessons: the importance of productivity, enjoyability, and serendipity to both experts and beginners. I will discuss the motivation for writing Fiona and Rasterio and explain how and why they diverge from other GIS software and embrace Python's native types, protocols, and idioms. I will also explain why they adhere to some GIS paradigms and bend or break others. Finally, I will show examples of using Fiona and Rasterio to read, manipulate, and write raster and vector data. Some examples will be familiar to users of older Python GIS software and will illustrate how Fiona and Rasterio let you get more done with less code and fewer bugs. I will also demonstrate fun and useful features not found in other geospatial libraries.
|
10.5446/31600 (DOI)
|
Hello everybody, good morning. This is the crazy data presentation: using PostGIS to fix errors and handle difficult data sets. My name is Daniel Miranda and I'm with the Brazilian Federal Police. I'm a forensics examiner and I'm working with GIS right now. And I would like to talk to you about what crazy data is in our context, in the context of the Brazilian Federal Police. And the bulk of this presentation is SQL recipes for fixing stuff. Actually I would like to apologize because I left three items of the abstract out. I couldn't fit them in the 20 minutes, so sorry about that. And how crazy data comes into existence: from our experience in handling this stuff, how does it come to be? And why is this all so important for us? And there's a treat at the end. We have a few bounties, but they are going to be awarded at the code sprint, not now. What is crazy data? So this is our very objective definition for something that's absolutely subjective. It lacks metadata, it contains either too many errors or grave errors, or it is too big. It is presented in an awkward format. It has more than one source; for example, the same rivers from the Amazon. More than one institution has mapped them, and when you get data from two different sources, you're bound to have inconsistencies. And it makes users lose hair. So some of these issues can be approached with the help of PostGIS, okay? So, quick recipes. That's what our team used while loading 850 shapefiles and DXFs and stuff like that into the database, building a lot of views of that data, and publishing it on our internal network. That is mainly for forensics. Our forensic experts, especially in the environmental area, have to have a good idea of the place they're going to examine. So this is my favorite. Actually, I saw a recipe on the OSGeo website, but I think this one is simpler. That's how it goes. So you make the old polygon just a little bit bigger, you make it a little bit smaller, then smaller again, then you make it bigger again, and at the end it's the same size. Well, why did I do this? It's an algorithm borrowed from image processing. It's called, depending on the order of the operations, closing or opening. Has any one of you heard of it? Okay, so that's new. Borrowed from image processing. So what happens is, if you have a spike to the inside of the polygon, when you buffer it, it will go away if you expand the polygon. But what about the spikes that go to the outside? So you shrink it again to the same size you started with, then you shrink it again, then you make it bigger again, and all the spikes will go, oh, sorry, all the spikes will go away. It just works. And what is that join=mitre magic there? We'll see. This is a PostGIS buffer option. If you do join=mitre, on the left side, it will create a sharp corner with just one vertex. So the sides of the polygon dilate, and you create a sharp corner there. The one on the right is the regular buffer. What happens is that if you do all this shrinking and growing stuff with a polygon, with the regular buffer, you get the, what is that color? It's different on my screen. Light green and dark green. The light green is what you get if you use regular buffers. The dark green is what you get when you use join=mitre. So it preserves all the vertices in the same place. They stay in the same place, and all the spikes go away. So it's pretty simple.
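Written out as SQL run from Python, the spike-removal recipe just described might look like the sketch below; the table, column and the 0.001 tolerance are illustrative, not copied from the slides (which, as the next part notes, carry a lot more casting).

import psycopg2

# Grow, shrink, shrink, grow by the same distance; join=mitre keeps the
# untouched vertices exactly where they were.
CLEAN_SPIKES = """
UPDATE parcels
SET geom = ST_Buffer(
             ST_Buffer(
               ST_Buffer(
                 ST_Buffer(geom, 0.001, 'join=mitre'),
                 -0.001, 'join=mitre'),
               -0.001, 'join=mitre'),
             0.001, 'join=mitre');
"""

conn = psycopg2.connect("dbname=gis user=postgres")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(CLEAN_SPIKES)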
If you want to actually do it on your database, it's going to look something like this — there's a lot more casting and stuff like that. And the next recipe is about invalid geometries. On the left we have a polygon. It's actually an invalid polygon because of the order the vertices were entered: the sides end up crossing each other, so it's invalid. And what happens if you run ST_Buffer on it? Does everybody here know the PostGIS basics? Okay. So if you run it through ST_Buffer, that's what you get. For me that was completely unexpected — I knew something good wouldn't come out of it, but that's what came out. Okay, so what if you run it through ST_MakeValid? This function is available since PostGIS 1.5, I think, and that's what it will do with that polygon. That doesn't mean it has fixed the polygon, because you're not sure that's what the user meant. Maybe the user meant a tilted square, but they put the vertices in the wrong order and it came out like this. But this is what the computer can do automatically; that's what ST_MakeValid does. It creates a multipolygon from the polygon, and then the buffer comes out right. So that's a very simple recipe, ST_MakeValid. I can't guarantee your polygon is what your user meant, but it will be valid. Okay, what else about validity? There's another very simple recipe that will have your database tell you where the mistake is. If you run this function, ST_IsValidDetail, you get a set, not a single value. If you extract the location from the set, it's a point — that specific one did have a line crossing there. And if you take out the other member of the set, which is the reason why it is invalid, in this case it will tell you that there is a ring self-intersection. Okay, next recipe: filling holes in unions. If you have, for example, several parcels and you make a union of them to build a bigger polygon, sometimes this doesn't go quite right and you have holes inside, because the sides of the polygons don't touch exactly. So what do you do? This is really simple, and it actually speeds up computation, because in between the computations there are fewer vertices. You buffer each one of the parcels, then you do ST_Union, then you buffer them back — that is a negative buffer. It's the same idea as the first recipe we spoke about: grow everything a little bit, then shrink everything a little bit. But it does not preserve boundaries, so, like we said before, if you use join=mitre it's better. But what if it's not a union? You have a raw polygon and you want to fill in the holes — how do you do that? This is the simple way to do it if it's a polygon; it doesn't work if it's a multipolygon. You extract the exterior ring and make a polygon out of it, so all the interior rings, which are the holes, are thrown away, and you get a filled polygon. If it's a multipolygon, it gets hairy, because you have to have a query inside another query: you dump the geometry, break it apart, and do that for each polygon. So that's the recipe. I don't know how you're going to have access to this presentation later, but I will try to publish it either on the OSGeo website or the event website. For the picture people, how does that come out? Well, if you actually write it in your code, there's some casting there, and it should do.
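The queries on the slides are not part of the transcript; hedged sketches of the validity and hole-filling recipes just described could look like this (table and column names, and the id values, are assumptions):

    -- Repair invalid geometries: ST_MakeValid returns a valid (multi)polygon,
    -- though not necessarily the shape the user originally intended.
    UPDATE parcels SET geom = ST_MakeValid(geom) WHERE NOT ST_IsValid(geom);

    -- Ask the database where and why a geometry is invalid.
    SELECT id,
           ST_AsText((ST_IsValidDetail(geom)).location) AS where_it_breaks,
           (ST_IsValidDetail(geom)).reason              AS why
    FROM parcels
    WHERE NOT ST_IsValid(geom);

    -- Fill holes in a simple polygon by rebuilding it from its exterior ring.
    SELECT ST_MakePolygon(ST_ExteriorRing(geom)) FROM parcels WHERE id = 42;

    -- For multipolygons, dump the parts, fill each one, and collect them again.
    SELECT ST_Collect(ST_MakePolygon(ST_ExteriorRing(d.geom)))
    FROM (SELECT (ST_Dump(geom)).geom FROM parcels WHERE id = 42) AS d;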
And the other recipe I would like to speak about is speeding up large data. If you have a layer with a lot of vertices and you want to render it — well, PostgreSQL 9.3 has a new feature called materialized views. What is that? You create a query over a layer, and PostgreSQL will internally create another table, just as if you wrote CREATE TABLE AS SELECT whatever you want. But it's not a simple table, it's a materialized view: you can create indexes on it and you can refresh it, but it doesn't update automatically, and the query is not re-run every time you hit the materialized table. That's really good for rendering high-density maps at small scales. So if you build a materialized view of a very heavy map, that's what you get. You have to tune the simplification parameter: if you zoom in, you'll see that it breaks down, but at a larger scale — or smaller, I never know which — if you pull further away, you get nice closed lines. And you have done this simplification just once, on the server. It works well for data that is more or less static, and it will render much faster. Let me show this code again — you have ST_Simplify there. You could improve it by making it preserve topology and things like that, but since you're going to render it at a different scale, it doesn't matter much. Okay, so this is the second root of all evil for crazy data — the first root of all evil is lack of metadata, that's what I experience in my work. The second one is that people don't check the data before committing it to the database. The lamest possible check that I could think of was this one: PostgreSQL has this structure, you add a CHECK constraint and you call ST_IsValid on the geometry you're putting into the record, and if it fails, it won't commit the record to the table. The algorithms for checking data in our production system are too complicated to show here, but they use triggers, so they actually modify the data as you put it in. For example, if somebody just modified the center of a polygon, it will edit the polygon and move it to the new center that the person wrote — it does funky stuff with the data. But that is too custom and too complicated to show here; that's just the idea, and it mostly doesn't concern PostGIS. This is the holy grail: if everybody had checks on their data, it would greatly reduce the amount of errors that creep into the databases. Okay, so we got all this crazy data and we have these recipes to deal with it, but we would like it not to exist in the first place. We would like the data to exist, but we would like it to be sane, not crazy. Large data sets — why are they crazy? Because they are big. That's one of our definitions of crazy: when it's too big for us, it's crazy. And you can't escape that; you can mitigate the problems, you can reduce, simplify, deduplicate, do stuff like that, but it's always going to be a problem. Lack of validation — what does lack of validation generate? Lots of topological errors, and it accepts bad georeferencing. That is mostly legacy data: right now the public institutions in Brazil have very strict regulations on how to produce this data, so it doesn't happen much with new data. But when you get legacy data, that's a problem. Okay, what does reprojection have to do with crazy data? Reprojection is just translating to another projection.
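A hedged sketch of the two ideas above — a pre-simplified materialized view for fast rendering and the "lamest possible check" — could look like this (names and the simplification tolerance are assumptions; the materialized view syntax is standard PostgreSQL 9.3+):

    -- Pre-simplified copy of a heavy layer, computed once and indexed for rendering.
    CREATE MATERIALIZED VIEW rivers_overview AS
      SELECT id, name, ST_Simplify(geom, 0.001) AS geom
      FROM rivers;
    CREATE INDEX rivers_overview_gix ON rivers_overview USING GIST (geom);
    -- Re-run the query on demand, for example after a bulk load:
    REFRESH MATERIALIZED VIEW rivers_overview;

    -- The simplest possible safeguard: refuse rows with invalid geometries.
    ALTER TABLE parcels
      ADD CONSTRAINT parcels_geom_valid CHECK (ST_IsValid(geom));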
It's a problem when you have topological errors. I should have done a drawing here, but I'll just mimic it. Let's say you have a parcel, a square parcel, and you split it down the middle. It's topologically correct — four vertices on each of the two parcels. Now you put an extra vertex on one of the parcels and it doesn't have a counterpart on the other parcel. Okay. Now you reproject that. That point in the middle of the edge will no longer be in the middle of the other parcel's edge; the other parcel should have had a counterpart to that point. So when you reproject, you detonate the bomb that the topological error set. And geometric operations: when people simplify polygons — the map I showed earlier was pretty well closed, the borders were matching — but when I did a simple ST_Simplify, it pulled everything apart and opened up gaps, because it is not a topological representation. PostGIS has topology support, but I don't have time to show it to you right now. If you don't have topology in mind, that's what will happen if you just simplify the polygons. Diversity of sources: if you have something like 850 data sources, you're bound to have very different formats and very different conventions, and that's very difficult for us to handle. But it doesn't concern PostGIS, so I don't have a recipe for that. Legacy databases have awkward data formats like DXF, spread over several DXFs using different layer names for everything — for example, river data on one DXF is layer four, on another one it's layer eight — so you can't just put them together. And imprecise definitions: for example, in our legacy production data there's a point where the system asks, what are the coordinates of your forensic report? The person doesn't know whether that means the coordinate of the place that was examined or the coordinate of her desk. So some people put the coordinates of their desk, some people put the coordinates of the unit they went to, and very crazy stuff. That had to be made more precise. Okay, why does it matter for us? Our production data is shared on this web portal — it's internal, not available on the internet. These dots over there are the forensic reports we produce, so we show them on the map. But the issue is support data and intelligence data. This is a fraction of the imagery we have from the Amazon and from the more remote, less populated areas of Brazil, where most of the deforestation happens — pollution, illegal mining, all kinds of bad stuff. We have this data for reference, so it's old data, and we have it to measure the amount of damage that was done: before, it was like this; now we get a new image, we fly a UAV over it, we image it again, and then we have a reference and can see how much was damaged. So that is support data, and we have a lot of it to process. And the real nitty-gritty is here: intelligence data. The forensic expert has to know a lot about the place he's going to. We gather data from several sources, from unofficial roads in the state of Pará to mining licenses. Like I said, it's 850 different things and more than 950 views of them. In this specific map we are showing mining licenses, indigenous reservations and environmental preservation areas.
And it's a challenge to deal with all that, because for every source you have, you have to build a different connector and assimilate it. So you run all those recipes on top of the data, make it sane and put it in the database. Okay, so now about the bounties. The last time I went to FOSS4G was Denver 2011, and I put in some bug reports on the OSGeo trackers, and whoever got to those bug reports and solved them got a souvenir from Brazil. Let me show some of them here. This is a t-shirt. This is a beach outfit — you can see Ivan wearing one of these in the corner; he got one last time. That's Frank, that's Oliver, that's Taichi from OSGeo / OSM Japan, that's Paul, and that's Jorge, who was the president of OSGeo Portugal at the time. These guys got those souvenirs. Some of them honoris causa — not because they solved bugs, but they did get the souvenirs. So if you're interested, it's mostly QGIS stuff; I didn't get to put in all the feature requests yet, but if you're interested, tune into that Twitter channel. Thank you very much. We're open for questions. Oh, and there are the mugs — there are the mugs too, okay? Very nice mugs. Hi, I have a question. How do you handle what might be called version control? Do you have naming conventions for all of your intermediate steps, or do you keep the old, you know, bad data? Well, I never thought of that, but I do — I go 0, 1, 2, 3, 4, 5 and I keep all of that. I don't use version control because it's a mess; it's spread across, I don't know, ten servers and stuff like that. And then you just know that the highest number is the right version? And they're all just tables in PostGIS? No, actually my scripts are all Python — the connectors are all Python, and they run these recipes on the database. I have a staging database; I load everything onto it and process it there, then I upload just the difference to the main production database. Thanks. Thank you.
|
Inteligeo is a system that stores a lot of information used by the Brazilian Federal Police Forensics to fight crime, initially in the environmental arena with a later expansion to other types of crime. During the construction of the database a lot of problems appeared for which PostGIS was the key to the solution. This presentation describes problems encountered by the team while loading 850+ shapefiles into the database, linking with external databases and building 950+ views of the data. Although the content of the recipes is very technical, the general concepts will be explained in an accessible language and correlated to real world cases.
Topics:
* Definition of crazy data in our context
* Quick recipes
- Spike removal
- Invalid geometry detection and fixing
- Filling holes
- Raster image footprints
- Hammering data into correct topologies
- Speeding data visualization with ST_Simplify and PGSQL 9.3's materialized views
- Rough georeferencing using an auxiliary table
- Creating constraints
* How crazy data is generated and our experience in handling each case
- Large datasets
- Lack of validation
- Reprojection
- Geometric operations
- Topological errors
- Imprecise definitions
- Legacy databases
- Bad georeferencing
We will also discuss why handling crazy data is important for the Brazilian Federal Police, our efforts in cleaning up data at the source and the implications of geographical data in general for fighting crime.
|
10.5446/31608 (DOI)
|
Can everyone hear me, okay? It's great to be here. I will present a user case — an example, following on from the previous speech — of using QGIS and QGIS Server for our web map application. First a short presentation of where I'm from, our city: Kristianstad in Sweden — not Stockholm, but in the very south of Sweden — a 400-year-old town with about 80,000 inhabitants in the area. The Danish king Christian IV founded this city on an island in a swamp surrounded by lakes, which was very good for defense purposes, but now it's a bit wet sometimes. This dark area is below sea level and we have a pump station that pumps out the water. And here is a little scenery from the river. In our department at the municipality we are about 16 people who work with everything from measuring and map making to GIS and system development and so forth. My name is Karl-Magnus Jönsson and I work as a GIS developer there. Now a short history. In the past, five years ago, everything was proprietary: we had an Oracle database, GIS software from Intergraph, and a web map application. We paid for it and we didn't really know what we got for that money, and as a result it was expensive and not very good. So we started, just like the previous speaker, with a mix of open source software. The first step was the web map — we thought the web map was too bad. So we started, in cooperation with some other cities in Sweden, to develop our web map framework called sMap. It was good, but we had to have some services as well, so we started to play with a GIS server, installed GeoServer, and tried to connect it to Oracle, but that wasn't very good. So we decided to have a database management system as well, PostGIS, and transferred the data over. This was good, and we also started with QGIS desktop to make better maps on the desktop. Now we had options to build better web maps, but it was still expensive and we had to keep double data and double cartography — we had to make maps here and here. So we had to think: wait a minute, we have two databases to keep synced, we're migrating data, and we have at least three ways of creating maps. We thought that speed maybe wasn't everything; we had to deal with how we work, the knowledge of the people who work with it, the data organization and the flow of data. So we had to find a simpler solution, and last year we decided to skip the proprietary GIS. So here we are. To avoid too much double work, we are trying not to use GeoServer that much and instead use QGIS and QGIS Server — we still have other stuff here as well, and it depends on an Oracle database, but we will try to move as much as possible to this PostGIS database. Then we have lower costs and the same data and cartography, and it's easier — a diagram with just one line here, simple, straight and beautiful. But in the real world it's a little bit more messy and complex. So why are we using QGIS Server, and why would you use it? In our case we decided to use QGIS as the desktop GIS, and then we can take advantage of reusing the cartography and the projects we have built up, get easier configuration of the web maps, and avoid doing the same things one more time. So we have fewer systems to be experts on.
Our goals with the project are, as it says, an easy, fast and powerful system for the users, updated maps with metadata, advanced cartography that is easy to create and can be reused, and maps that are easy and flexible to administer. This isn't going to be a very technical speech; it's just to show the use cases so you can get ideas and do the same thing at home. You know QGIS desktop — a powerful desktop GIS that can do a lot of things. QGIS Server is maybe not that well known, but it's an application that serves WMS and WFS services, installed on an Apache web server. It's easy to publish these services directly from the project in your desktop, and you get exactly the same look as in your desktop project. All configuration of the services is done in the project and its properties. You can use the powerful symbology and labeling in QGIS, and you get some extras as well — we have web-based printing out of the box. So the idea is that you work in your desktop, in this case as an administrator, with our web map layers. You have the same layers that are in the web map, and it's easy to change things — just click on the layer and edit the properties. Then the web map application has the same layers and the same cartography. How do you do it? In the project properties you enable the service and fill in some metadata and some other details. You can restrict WFS capabilities, choosing what is published as WFS, and in the field properties you can mark which attributes are published via WMS or WFS. Here's an example of a layer that we first created as SLD in GeoServer. It looks like this — and there's one more page, and one more page. So it's a lot of code, and if you are good at coding that's not a problem, but there aren't that many people at our office who are good at coding. This is the same layer in QGIS: there's a graphical interface and you can do the same thing as an ordinary GIS user. And some examples of how it looks in the real world: for the user of the web map application there is some metadata about the layer, and you can click on the features with WFS and get the information. The metadata comes from a table where we put all the information and also the configuration for the map. I can show you some useful cartographic features of QGIS: labels from expressions, rule-based styling, symbol levels, and also a very useful feature for saving the style to the database. An example of doing custom labels: in this case two attributes are combined to make one label, and there are a lot of options — it's very powerful. You can build expressions like this to concatenate two attributes. There are a lot of styling options. Rule-based styling means you make a rule, maybe with a filter and zoom levels, and you attach a symbol to that rule. Then you can use symbol levels to make — what do you call them — the roads on the bridges look nice, so bridges always draw over tunnels. And when you have finished the style you can save it to the database. We are using PostGIS, and when a user on the desktop wants to use the same layer, they can bring it up from PostGIS and they get the style as well — no random styling. We started working with QGIS Server, I believe it was a year ago, and we began with the base layers of the map. There is a lot of cartographic work there: we have the roads, the texts and so on, all done with QGIS.
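As a concrete illustration of the labels-from-expressions idea mentioned above: QGIS expressions use a SQL-like syntax, and a label combining two attributes could be written roughly like this (the field names here are made up for the example):

    "street_name" || ' ' || "house_number"

or equivalently with concat("street_name", ' ', "house_number"), which also tolerates NULL values.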
It's static data that doesn't get that many updates, and to get more speed we have cached these layers through GeoWebCache. We have a few published layers — the base maps — and the whole project is cached as one layer in GeoWebCache. And as these are base layers, we have no GetFeatureInfo or WFS on them. This year we have worked on doing the overlays in QGIS Server as well. There we have many layers and they are not static, so we can't cache them in the same way. Instead we are trying to cache them through a Varnish cache on the web server, which caches all requests to the web server; if it remembers that it got a request for this tile just before, it serves it from memory. We also send WFS GetFeature requests to this QGIS Server. The reason we tried Varnish is that if you cache through a tile cache such as GeoWebCache or some other cache, you have to set it up one more time and define which layers and which names. In this case we just save our project, maybe with a new layer or a new style, and we don't have to flush the cache and reseed — it's taken care of automatically. The web map solution we have built is based on the sMap framework, our own framework built on OpenLayers and jQuery, and almost all our layers come from QGIS Server, some from GeoServer. The configuration is made from this metadata table, and we also have the Varnish cache on the web server. We made some extra functions for restricted layers — you have to be logged in to the application, which is connected to the Microsoft AD — together with Varnish. And we have an almost finished export function built with FME from Safe Software. So here's the idea of the flow: the administrator, down left, works with QGIS, saves the project file to the QGIS Server and the style to PostGIS, and puts some metadata in the metadata table. The user comes in from up there and can choose between the browser web map or the desktop. It's the same data, the same cartography, and we have the caching mechanisms here and the configuration from metadata. So that's about it. Our experiences of using QGIS Server: the installation process was quite straightforward — we are using a Linux server and an Apache web server; I haven't done the installation myself but I heard it was no big problem. Using the system is really good: the administrator can create maps, save them to the server, and they're immediately available as services. Performance: it's not the fastest WMS server — I believe both GeoServer and MapServer are a bit faster — but the real plus is that you don't have to learn SLD or some tricky code language to do the cartography. And with the different caching mechanisms we were able to deal with that. We had some problems when upgrading between versions — we can talk about that later — but that's a lesson we learned: do the upgrades with precaution and much testing. The future of this project: we are trying to optimize the caching mechanisms and how we deal with them, and maybe we're looking at the QGIS Web Client, which is tightly connected with these services, and the print service and so on. I think that's all I have to say. So if there are any questions, I'll try to answer them. Are there any questions? Yes — it looks like a great advantage is the styling. Yeah.
With QGIS desktop and the SLD — you're not using the web browser component, it's OpenLayers — is it able to handle pretty much everything that's in the styling implementation in QGIS, or are there any incompatibilities? I understand that in GeoServer, of course, SLD looks pretty daunting to a non-developer, and it would be a great advantage to be able to do the styling in an environment like the desktop and then have it completely compatible in the web browser. And apparently that is the case now? Yeah, I think it works. Maybe you can do some terrific and great things in GeoServer — I've done some GeoServer coding too and I know you can — but we don't have the time to do that and it's too tricky. We have ordinary GIS engineers, not developers in that sense. So in this case the user can do the cartography and the styling herself, just save it and publish it herself, or we can help with the publishing. That's the greatest advantage, I think. Okay. Hi — I was just wondering if QGIS Server has any scripting interface to automate processes on the server, like Python or Ruby, something like that? No, not that I know of. Maybe you could — the source code is out there. It's quite a simple server, but it's very good for its purpose. QGIS Server doesn't have any scripting capabilities built in. Usually you do that via WPS or something like that if you want to provide services. But there is now an experimental branch including a Python interpreter in QGIS Server, so maybe that's coming, but it's not decided yet. Then it's Python. Are there any other questions? Well, let's thank our speaker one more time. Maybe you can do the scripting in PostGIS.
|
Kristianstad municipality in Sweden has since 2013 been using QGIS and QGIS Server as a base in our GIS platform. Our goal is to have a user friendly, yet powerful, set of applications from server via desktop and web to mobile applications. All based on open source. QGIS and QGIS Server have several functions that make it easier for both the users and administrators of the systems. That could be saving styles and attribute forms to the database, or styling and publishing WMS and WFS directly from the desktop QGIS application. With a combination of different types of caching mechanisms we achieve fast and flexible services for our web applications. These open source projects, sMap and sMap-mobile, have also been designed to be fast, flexible and user friendly.
|
10.5446/31609 (DOI)
|
Hello, good morning everybody. My name is Andrea Aime. I'm a GeoTools and GeoServer core developer working for GeoSolutions, an Italian company providing global support for GeoTools, GeoServer, GeoNetwork, CKAN, and a few other open source geospatial tools. Today I'm here to talk about the recent developments in raster data support in GeoTools and GeoServer. So let's have a look at the technology stack that GeoServer and GeoTools use to deal with raster data. This is more or less the block diagram of the main libraries that we use to build GeoServer. GeoServer, as you can see, is built on top of GeoTools, which is in turn built on top of a few open source libraries dealing with raster data. In particular we use JAI, Java Advanced Imaging. JAI is a very nice library — very powerful, very extensible and flexible — that gives us the ability to do many kinds of image processing, such as the common operations: clipping, rescaling, warping images, and so on. And it's very nice in that it's tile-based, so we can handle very large imagery without having to load it fully into memory, processing it bit by bit instead, and thus handle both very large inputs and very large outputs. The one thing which is not nice about the library is that it was developed by Sun and it is no longer under active development. However, it is so extensible that you can basically replace every single bit of it with your own implementation, just retaining the overall architecture. And that's what we did in the JAI-EXT project: we created a set of drop-in replacements for the JAI operations that we need — higher performance, pure Java, supporting no-data values in raster data, supporting regions of interest, with a number of fixes compared to the base JAI, and supporting masking properly. This is the set of operations that we have implemented so far. As I said, we implemented more or less what we need in GeoTools and GeoServer to do the processing, and we pushed up the scalability of the library quite a bit by implementing some key elements with higher scalability tools. If what JAI provides is not sufficient, we also use JAITools and Jiffle, which are high performance raster processing libraries that implement new JAI operations such as vectorization and contour extraction, and also a raster algebra — a very efficient raster algebra that allows us to do computations on raster data in an efficient and flexible way. Of course, before processing the data we have to read it from somewhere. So we have ImageIO, which is, again, a Sun library that provides support for reading PNGs, JPEGs and whatnot, but also, very importantly, TIFF and GeoTIFF, with native acceleration for some of them. Unfortunately, again, there is no source code for the native decoders, and development has more or less stopped. But fortunately the architecture is pretty much like the JAI one, so we are free to replace and improve upon it. And that's what we did with ImageIO-Ext, which provides higher performance raster I/O compared to ImageIO, adds capabilities that weren't already supported, provides faster JPEG encoding and decoding via TurboJPEG and via GDAL, and supports an extra variety of input and output formats, such as JPEG 2000, ECW, MrSID, and so on.
GeoTools builds on top of all this, providing higher level tools for dealing with raster data: to represent coverages, to represent their georeferencing, to reproject them and so on, putting together all the tools that I've just cited into a sensible whole. GeoServer builds on top of GeoTools to provide all the OGC services that you are used to — WMS, WCS, WFS, WPS and so on — with Google Earth and Google Maps support. So this whole stack provides a very capable set of raster management abilities and the associated OGC services for publishing this data. Now, let's see what we recently achieved in terms of new functionality in GeoTools and GeoServer when it comes to raster data processing. This is a little pet project of mine: last Christmas I was a little bored and had some time to work, and I implemented a new PNG encoder, which is easily 50% faster than anything we had before. Even though it's pure Java, it actually beats hands down the native PNG encoder provided by ImageIO. This gave us quite a bit of speed-up any time you have to encode a PNG image as output, which is common when you are publishing vector data, but it's also important for raster data, because some of your maps can be digital elevation models and the like, and for those you probably want to use the PNG output format. We also had quite a push to improve the quality of raster reprojection onto difficult projections. Here we have an example with the polar stereographic projection, reprojecting a whole-world image — a rain precipitation model covering the entire world, corner to corner. Before, when we tried to reproject it to polar stereographic, you can see a slice of the pie was missing, and there were some problems fitting the right resolution for the output. This is the new version of the output, which is cut at the equator, has the right resolution, and everything fits into the map. We also added support for crossing the dateline with raster data. If you have maps that sit across the dateline — common in New Zealand, but also in every organization that deals with worldwide mapping — we now support exactly what we have supported for years on vector data, that is, replicating the data on the other side of the dateline so that you have a seamless map, and you can see your map as a whole even if you are looking at the Pacific dateline. We added support for masking in image mosaics, which is very handy in case you have imagery that partially overlaps and has a bunch of no-data elements. You can associate with each image you are mosaicking a vector mask that says where the good data is, and GeoServer will do the cutting and the mosaicking for you. This is useful for mosaicking, but if you want, you can also use it to cookie-cut your imagery to a certain shape. In this case we have very high resolution data for the Bolzano province in Italy, shown on top of Blue Marble data. The data actually has a slightly larger coverage, but they only wanted to display the higher resolution data within the province, and the footprint support we implemented allows them to cut the imagery exactly where they want. In GeoServer, during the last year, we implemented Web Coverage Service 2.0, which I dare say is the first sane version of Web Coverage Service ever.
It's easy to use — it can be used by a human being. WCS 1.0 and WCS 1.1 were plagued by a very complex request syntax; in WCS 2.0 instead we have a very nice, simple syntax to do most of the normal things, like extracting a bounding box, rescaling the image, or reprojecting it, without going crazy in the process. We implemented a number of extensions — basically everything in WCS is an extension: reprojection is an extension, rescaling is an extension, GeoTIFF encoding is an extension, and so on. A base WCS server actually only implements extracting a certain area; everything else is an extension which a server may or may not decide to implement. GeoServer basically implements them all, in order to be compliant with another profile, the Earth Observation profile, which basically demands everything plus something. We implemented the WCS EO (Earth Observation) profile for DLR, the German space agency, along with WCS 2.0 and good support for NetCDF. WCS EO is interesting if you have multidimensional raster data — not just a flat map but a hypercube of data with time and elevation dimensions, and maybe the runtime of your forecast and so on, so it might be five dimensions or more. WCS EO allows you to advertise the extra dimensions and describe them, so that people can build a request that extracts exactly the bit of the hypercube they want. In GeoServer we also implemented a new feature, coverage views, which allows you to take a complex raster — maybe a seven-band Landsat image — and decide that you want to serve just three bands to the user, or take several coverages, that is several files, and merge them together as if they were a single virtual coverage, taking bands from the several files. This is the current state of coverage views; we are going to push it further by allowing computation of new bands on the fly — I'll talk about that later. During the last year, along with the WCS EO effort, we did a lot of work to support more complex coverage readers. In particular we support the NetCDF and GRIB formats, which are common formats for hypercube imagery — imagery which is not just a flat snapshot, but which contains, say, temperature at several times, now and in the future, and at several elevations. So it can be a three, four, five-dimensional model — questions later. And we implemented it so that we can expose it via GeoServer. We can mosaic it, because often you have several NetCDF files forming a seamless sequence in time, so you want to put them all together to form a virtual hypercube made of the little chunks. This is an example of a dataset that we are serving from NetCDF: it's the Polyphemus dataset, which contains the concentration of three gases at different elevations and at different times. It's one of the datasets that DLR is serving with GeoServer. We took the general characteristics of NetCDF and GRIB and made a general programming interface that you can implement in GeoTools and GeoServer to serve any other data format that has, inside a single source, multiple coverages, multiple times, multiple elevations, and so on. So we didn't stop at implementing NetCDF and GRIB; we opened the door to implementing whatever other multidimensional format you might want to support in the future.
We expanded the user interface of GeoServer to handle this. If you are used to GeoServer, you know that when you publish a GeoTIFF it tells you there's just one layer inside it. Now, when you publish a NetCDF or a multidimensional mosaic, it will list all the coverages that are inside — because, as I said before, the Polyphemus example contains three gases, so three actual phenomena in the same file. And we pushed it a bit further to describe each and every slice that you can find in these hypercubes in a filterable way. Again, we rolled out programmable interfaces that people can use to implement their own introspection of a multidimensional dataset. And we used them to expose the structure of the datasets, both in WCS EO — as I said before, there is a DescribeCoverage call that you can use to figure out what's inside your dataset — and via REST configuration operations, so that you can grab a new NetCDF, throw it at GeoServer via the REST API, that is, make an upload, have GeoServer harvest it and figure out what's inside, and then GeoServer will expose the contents of that NetCDF file to you, again via the REST API, so you can basically throw it at GeoServer and then inspect its contents once it's published. And once it's harvested, you can also decide to throw away certain bits that you don't want to show to the users, like a certain period of time or a certain range of elevations, or update some of the metadata. So it's pretty flexible; you can do a lot of stuff with it. So far I talked about supporting NetCDF as input. We can also support output in NetCDF, which makes a lot of sense if you think about it: we can read multidimensional hypercubes, so it makes sense to also be able to output them. So we implemented a NetCDF output format for WCS, which allows you, as I said, to specify a little hypercube out of the large hypercube of data that you have and extract it in a format that still preserves its multidimensional nature. This is just an example of the call. This is trimming in WCS: I specify the bounding box via the Long and Lat subsets, and then I specify a set of elevations and a set of times. So I'm basically building a four-dimensional hypercube and asking GeoServer to return the results in NetCDF format. And what we generate is a Climate and Forecast (CF) convention compliant NetCDF, which can be opened with most of the desktop tools around — ToolsUI, Panoply, and so on. In WMS we also implemented the Earth Observation profile, which is great if you have complex satellite images that you want to publish to the public. The profile assumes that you want to provide a sort of browse overview of your data, and then publish different views of it as sublayers in a tree. So you can publish the footprints of your data as a vector layer; you publish the several bands making up your acquisition and allow people to filter on those bands to get only the ones they want; you can publish derived data — not the raw data, but data with some processing applied, for example the concentration of chlorophyll in the water — as part of the Earth Observation tree; and finally some of the flags, such as where the clouds were, where the flares were — every single bit that might compromise the quality of the data in the other layers. I already described most of this.
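The actual GetCoverage request shown on the slide ("just an example of the call") is not reproduced in the transcript; a WCS 2.0 KVP request along the lines described might look roughly like this (host, workspace, coverage name and subset values are made up, and the exact axis labels depend on how the coverage is configured; the URL is split across lines only for readability):

    http://example.org/geoserver/ows?service=WCS&version=2.0.1&request=GetCoverage
      &coverageId=myws__polyphemus_O3
      &subset=Long(5,20)&subset=Lat(40,50)
      &subset=time("2013-03-01T00:00:00.000Z","2013-03-02T00:00:00.000Z")
      &subset=elevation(300,450)
      &format=application/x-netcdf

Each subset parameter trims one axis of the hypercube, and the format parameter asks GeoServer for the multidimensional NetCDF output discussed above.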
For this scientific data support we also implemented dynamic color map support, in which GeoServer can be told to apply a color map not between certain fixed values, but stretched between the minimum and maximum values of the data, or of the particular layer, that I'm displaying, to better visualize the dynamic characteristics of the dataset in a certain area. What's cooking? What are we working on? We are working on native no-data support. Most scientific data has a notion of no-data: a particular value in your dataset that says, I don't know what's here. Right now JAI does not support it, so it basically takes that value, interpolates it and rescales it along with the rest, creating artifacts. With JAI-EXT we implemented full no-data support, and we are going to merge it into GeoTools in the next few months. My hope is that we will have full no-data support in GeoTools and GeoServer by March 2015, when we are going to release GeoServer 2.7 and GeoTools 13. We are working on high performance raster algebra via WPS using the Jiffle library, which is a high performance, pure Java library for doing raster algebra between raster layers. It's very nice because it allows you to specify your own little script doing the computation cell by cell, with offsets and whatnot, and then it turns it into JVM bytecode. The JVM will see that this bytecode is being used over and over while computing over the cells, and will turn it right away into native code. So within a split second, the script that you wrote in a text editor is running as native code in GeoServer — top speed. And, as I said, we are going to mix this concept with coverage views, so that you can not only select the bands you want from one or several datasets, but also say: I want to add a few extra bands which are going to be computed from the others dynamically, using a Jiffle script. And again, top speed, because it's going to be turned into native code the first time you use it. And this is it — this is more or less the overview of what we implemented recently. So, if you have questions. In which versions are all of those features available at the moment? Aside from those last ones, which are 2.7, all the others are in 2.6? Exactly. Yes? Okay. Would anyone else like to ask a question? You said you support the GRIB format — version one or two, or both? Two. Yeah, I forgot to write it, but I'm sure we support GRIB 2 because that's the format that was given to us by EUMETSAT, which is a European organization that deals a lot with GRIB files. I honestly am not sure whether we also support GRIB 1 or not — I don't think we tested it. You may have mentioned this and I might have missed it, but can the coverage view configuration be done programmatically via the REST API? Of course, yes. The coverage view attaches an extra bit of configuration to the coverage resource, and as usual with the REST API, what I normally suggest people do is to use the user interface to set up the bits that you would like to automate, then read the XML/JSON resource out of the REST API and look at how it's done. But yeah, it's there. I've always had trouble making true color images in GeoServer. It can do color maps fine, but because of the no-data value issue —
I mean, maybe there's a way to do it, but I've never been able to figure it out. I've tried an alpha layer and it just is not transparent. Is that due to this GeoTools issue? It's actually a JAI issue. JAI was made by Sun on contract with NASA, and NASA needed it for satellite imagery, so they really didn't have any no-data — they didn't actually care about the problem. The problem is that every time you interpolate or rescale an image, you have to take a bunch of pixels and compute a new one, and JAI doesn't skip the no-data ones; it just blends them in as if they were valid data, so you get some odd averages in those processes. That's what we have fixed in JAI-EXT: all the operations are now no-data aware, so they skip the no-data and only use the good parts of the image. That said, JAI-EXT is going to be merged into GeoTools as a replacement for JAI by March 2015, as I said. The other option you have now is image mosaics — and an image mosaic can be made of only one image if you want, so you can use whatever features image mosaics provide for any image. There are two ways. One is to create vector footprints saying where the good data is, and the mosaic will cut to them. The other is that when you are using the mosaic, you can specify the input transparent color and the output transparent color, and you can tell it that the input transparent color is -99.99, which is my no-data, and it will get you some way towards what you want. It's not going to fix everything — otherwise we wouldn't have implemented JAI-EXT. Does this issue with no-data in JAITools or NetCDF prevent you from working with multiple resolution raster data? Not really, no. Multiple resolutions — you mean in the same mosaic? We can actually support heterogeneous mosaics with data at different resolutions. What we don't support yet is an image mosaic made of data in different projections or different color models. We actually have a deep fork of GeoServer at one of our customer sites that handles both cases, so that you can mosaic gray, RGB and SAR data in the same image, maybe in different projections, but it's not merged into the official GeoServer yet. One thing with opening up NetCDF support is that you bring in the potential for huge data: dealing with 150 years of daily time steps, you have 55,000 time steps — that's almost half a terabyte for a standard precip, tmax, tmin file. What kind of optimizations have been done for speed, caching, or time slicing, whether it's time-major or time-minor? OK, in terms of NetCDF we worked quite a bit on the topic you've just spoken about. Generally speaking, with NetCDF data the actual spatial extent and resolution of the grid is not that big; as you said, you have a lot of times, several elevations, and so on, and they multiply, so it grows into a very huge dataset. The problem is getting quickly to the time and elevation you want within that giant dataset. For that, we create indexes on disk: one index file per NetCDF file, to quickly locate, given a certain time and elevation, the start of the data in the multidimensional array. That's one thing we do. And then we have a global index for the mosaic, in case you are putting together several NetCDF files, which is indexed spatially, temporally, and on whatever other dimensions there are.
We normally keep that in a relational database, so that you can index all the columns that define all the dimensions. When a request comes in, we quickly locate, via that index, the NetCDF file that satisfies the request and which part of that NetCDF file actually contains the data we want, then go to disk and read exactly that part of the multidimensional array from that offset. So it's pretty fast. So I guess the same question: have you done the same optimization for the GRIB format? Yes, of course — exactly the same structure is prepared for the GRIB format. Cool. If there are no more questions, I guess we'll go for the lunch break.
|
The purpose of this presentation is, on one side, to dissect the developments performed during the last year as far as raster data support in GeoTools and GeoServer is concerned, and on the other side to introduce and discuss the future development directions. Advancements and improvements for the management of raster mosaics and pyramids will be introduced and analyzed, as well as the latest developments for the exploitation of GDAL raster sources. Extensive details will be provided on the latest updates for the management of multidimensional raster data used in the Remote Sensing and MetOc fields. The presentation will also introduce and provide updates on the JAITools and ImageIO-Ext projects. JAITools provides a number of new raster data analysis operators, including powerful and fast raster algebra support. ImageIO-Ext bridges the gap across the Java world and native raster data access libraries, providing high performance access to GDAL, Kakadu and other libraries. The presentation will wrap up with an overview of unresolved issues and challenges that still need to be addressed, suggesting tips and workarounds that allow users to leverage the full potential of the systems.
|
10.5446/31610 (DOI)
|
I want to bring together some of the ideas we had — is that better? Alright — that we pulled together using a lot of different tools, and just go through how we use those tools. I'm sure most people here know what the National Park Service is. They have a lot of iconic places. This is Arches National Park in Moab. This is Yellowstone, Golden Gate, and Keyknife Yards. And this is the National Mall in Washington. We also have an iconic map style: when you go to the parks, you pick up these printed maps. They pretty much give you a good idea of the park, but they are just on paper, so they're a single scale — while they're pretty detailed, they're only useful as an overview of the park. We've had a lot of them that are pretty nice, and the colors go together well. So, I work on the NPMap team. The NPMap team's goal is to create web maps: we take these original paper maps and figure out ways to put them onto the web. Our mission is basically to make it easy to build and deploy these beautiful maps. We have four internal tools that we use to do that. We have something we call the Builder, which is an all-encompassing tool where you can throw your information in and create a nice map — one example we have is the Bears in Parks map. If you want to get more creative, we have a back-end library called NPMap.js; it's kind of mimicking Mapbox's JS library — their library is really good and we wanted to extend it a little for our specific goals. And all of that is driven in the back end by our Park Tiles project: this is really how we pull those park maps into tiles. It's all built off of Mapbox vector tiles right now; we have terrain in the background and we draw our own park polygons on top of it. We've done a lot of work to get these polygons to look at least fairly decent, and we're still working with the parks to try to get better polygons. I have some examples of the different scales that we use, and this is really what's important in a web map as opposed to a print map. Like Grand Teton — you can zoom out or zoom in and see the detail we have in the different icons in the park: you can see the picnic areas and, I guess, the start of a hiking trail. And you can see the same thing with Acadia. The way that we want to get all this information in there is through a project that we call Places. We created this to make it very easy for anybody to come into the map and work on it. The real problem is that a lot of parks out there have some really great information — you can look anywhere in their park, talk to their GIS staff, and they know exactly where things are. But some other parks just say, oh, we have stuff, we're not sure where it is, but it's in this general area. They generally know they have trailheads and where they are as far as walking goes, but as far as putting them on a map, they don't have the technology to do that. So what we really wanted to do with this Places project was motivate people to go in and edit. And that's a hard thing to do when you have a lot of people who aren't really familiar with technology — people who have been park rangers for a long time and really just spend all their time out in the park.
They don't really know much about how they would get into these systems. So we identified five things we were looking for when we were building this. The first thing we needed was a great interface, so people could come in, build on the existing information, and create a good map using tools that are already out there. We wanted a data structure that you could really go in and modify to whatever you want, but that still has some structure to it. We also wanted a good back-end API that wouldn't fail on us, whether we had one or two people editing at a time or a hundred, and that we knew was proven to work. We like to have a good ecosystem of tools so you can really build off of this data: you can break it up into different extracts by park, or by theme — just campsites, for example, which is something that's pretty popular. And another thing that we really wanted was a good feedback loop, so you could go in and say, okay, I made this change, let's go see what it looks like on the map, and be able to see that within a short period of time. So the next thing we did was look at different systems, and we saw that OpenStreetMap was out there with the great iD editor, which you probably heard about earlier today. The OpenStreetMap data structure is key-value tags, so it's extendable to pretty much anything you want it to be, but there is a defined wiki that people generally use to come up with most of their tagging. We tried to map our ideas of tags to what's going on in OpenStreetMap so we can go between the two systems. OpenStreetMap has an API that is pretty substantial; it works well, they are constantly upgrading it, and it's pretty cool. We use a lot of these OpenStreetMap tools — particularly JOSM, which is how we get all of our information into our system — but you can use the other tools to do different kinds of extracts or just data manipulation. And one thing we really liked about OpenStreetMap is the feedback, where your edit comes back within a few minutes; if you go in there, you can see it on your screen. So we decided to fork the iD editor. We changed the icons over there on the left side to our own National Park Service icons, and we changed the tags behind them to only show tags that we really want to show on the map. We left all the tags in the iD editor, so if anybody adds something from OpenStreetMap it will show up with the proper name, but we disabled the search functionality for those, so people aren't going to accidentally add things that aren't really needed on our maps. The great thing about iD is you don't need to know what the tagging is behind the scenes. I put a little window up there — I'm not sure how easy that is to see — but it's really confusing for users to go in and look at this list of tags and figure out, oh, this is what you mean by this. iD makes it really easy to just click a button and say Ranger Station; you don't have to go through this whole mess of tags. And we also add a unique identifier at the bottom there each time someone adds something, which is very useful for internal tracking. And we needed to create some starting data for this database, so a while ago we had some interns go out to each of those printed maps, figure out where each point was, and put it on our map.
This was created a while ago and it's not the best source of data, but it's a good starting point. Once people go in there and see that there is something there but it's not completely correct, they get motivated to start moving things around and saying, well, I know that this is really over here. We found that just by putting a little bit of information in there, it really helps get people motivated and interested. So, to explain the rest of how it works, I'll just go through some steps. The first thing that you do when you enter this editor is log in. Right now we're actually using OpenStreetMap as our login platform: we send them the request and they send us back an API key saying whether the person is validated or not. We decided to do that because it's really great to be able to say, this is the user who contributed this, and when we want to write back into OpenStreetMap, it's really great to have that full history of who did each edit. We are also looking at working with Active Directory internally — that's just something the government really uses internally. So it's great to be able to use OpenStreetMap, but we are looking at other login methods as well, and at tools to keep track of everybody's contributions. When they save their contribution, we do a three-step process. OpenStreetMap has a pretty simple database behind what's called API 0.6. It really just keeps track of the latitude and longitude of each point; it has no idea what a geometry is. The latitude and longitude are not even floating point numbers — they're just big integers — and all of the tags are linked through a separate table. So we transform that into something similar to the schema called pgsnapshot, which is one of the schemas that OpenStreetMap tooling uses to render its data. We use that for all of our queries, because it is a spatially enabled database: we can make a bounding box and pull data out, which is a lot quicker than querying on raw numbers, and we can also do spatial indexing on it, which makes it a lot quicker. And then we push it into the last step, which is the rendering step, where it is almost ready to be displayed on the map. As I mentioned before, in API 0.6 the latitude and longitude fields are just very large numbers, and there's nothing really spatial about them other than that they can be mapped to WGS84 coordinates. So we bring that into this pgsnapshot database. The other interesting thing is that it uses hstore as the field to store all of the tags, so you have access to everything without joining tables, which makes it a lot easier for rendering tools to pick it up. So then we get to rendering our maps, which is probably the most fun part, I think. It requires a bit of working with all the tools. We match those tags to what we had originally called them — we just have a SQL script that pulls the tags they listed and matches them to the name we had set up for each. We rank things in order of importance and how we want to display them: when you first look at a park, you're going to see something like visitor centers, but as you zoom in further, you'll see more things like picnic sites and waysides. And for this rendering we also keep a timestamp.
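A hedged sketch of what querying this pgsnapshot-style schema looks like — the table and tag names are assumptions, but the hstore operators are standard PostgreSQL, and the raw OSM API database conventionally stores latitude/longitude as integers scaled by 10^7:

    -- Pull candidate points of interest for one park's bounding box,
    -- reading attributes straight out of the hstore tag column.
    SELECT id,
           tags -> 'name'    AS name,
           tags -> 'tourism' AS tourism,
           geom
    FROM nodes
    WHERE tags ? 'tourism'                      -- has a tourism=* tag at all
      AND geom && ST_MakeEnvelope(-111.0, 43.5, -110.4, 44.1, 4326);

    -- Converting the raw API 0.6 integer coordinates back to degrees:
    -- latitude_degrees = latitude / 10000000.0 (and likewise for longitude).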
And that's just so when we run a script that updates our databases externally, we know when the last time we ran it was and we can build from there. So, that's what step three is. This is a process that we are currently running every 15 minutes. And we can't write out to our maps immediately because we are using Mapbox Studio to create vector tiles. We also are writing out some of our points just to CartoDB because it's a really great way to look at vector information out there. So, we run it through this program called tilelive, which builds just the tiles that you request. So, we just give it a list of tiles that have been updated in the last 15 minutes, or since the last time the process ran, and tilelive will run Mapbox Studio on just that small list of tiles. And then once that's all done, we upload it to the Mapbox servers, and Mapbox is what's serving out all of our tiles in nice, easy to use formats. We found that it takes about 24 hours for everything to update, and we've been working to see what we can do to get it to be faster, but just because of caching and CDN stuff, it takes a little while. The other thing that we do with the points is we bring them out to CartoDB, which is a lot quicker at getting your information out. But it is not baked into our tiles at all, and there's no interleaving of layers, so when we throw these on top of our maps, they will obscure all the labels. And it's just because they're not knowledgeable of what's in the rest of the map; they're a separate data source entirely. So, they are really useful for putting on quick maps that people create, especially if they want to use something like the Builder to create a map of all the campgrounds in a park. And that's one other thing that we check for: when we render these, we figure out which park each point is in using Postgres. We use the ST_Contains function to find what park they're in, and we connect that to the map. So, in the future, we really want to add buildings to our process. iD easily supports areas, and iD is a great editor to add things to. It's really easy to work with and they made it really easy to extend. Internally, we have some more problems with buildings because there are just data management issues with different parks. We have a lot of information, and it's really just a matter of whose information we are going to use and how we are going to create a crosswalk between the traditional GIS information that we have, what we want to put out on our public-facing maps, and how we want people to contribute. So, one of the things that we're really interested in is connecting our stuff with Esri feature services. And we talked with Esri about the Koop project, which is a really cool tool they have that allows you to access all kinds of data sources through an Esri feature service, so you can use those less-than-open-source tools with your open source information. GitHub is probably one of the coolest sources you can use: you can just point it at a GitHub repository and open the data up in your proprietary software. And while a lot of our stuff is open source, we do have a traditional GIS department that does work with these tools. And we want to make sure that everybody's included in the community and that they can use their own tools as well as the tools that we've created. And we think that's a really good way to get everybody really motivated to work with it, because they don't have to relearn something. They can use their existing tools.
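To make the incremental rendering step a little more concrete, here is a hedged sketch of the bookkeeping described above: select the points edited since the last run, attach the containing park with ST_Contains, and work out which tile coordinates need rebuilding. The table names, the parks layer, and the zoom range are assumptions, and the mercantile library is used here only for the tile math; it is not necessarily what the Places pipeline actually uses.

import psycopg2
import mercantile   # small library for slippy-map tile arithmetic

# Points edited since the last run, plus the park each one falls inside.
# Table and column names are illustrative, not the actual Places schema.
UPDATED_POINTS_SQL = """
SELECT n.id,
       ST_X(n.geom) AS lon,
       ST_Y(n.geom) AS lat,
       p.park_code
  FROM render_nodes n
  LEFT JOIN park_boundaries p
    ON ST_Contains(p.geom, n.geom)
 WHERE n.tstamp > %s;
"""

def tiles_to_rebuild(dsn, last_run, zooms=range(10, 17)):
    """Return the set of (z, x, y) tiles touched since last_run."""
    dirty = set()
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(UPDATED_POINTS_SQL, (last_run,))
        for _id, lon, lat, _park in cur:
            for z in zooms:
                t = mercantile.tile(lon, lat, z)
                dirty.add((t.z, t.x, t.y))
    return dirty

# The resulting tile list is what would be handed to tilelive / Mapbox Studio
# to re-render just the affected tiles before uploading them to Mapbox.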
And one of the things that I'm really interested in doing is getting this information back into OpenStreetMap. And the OpenStreetMap community does not like you to just take your information and overwrite what's already in OpenStreetMap. It just causes problems, where your information may not be more correct than what's already there, or you really shouldn't be deleting other people's work without at least going through and verifying it. So they've created a tool called MapRoulette, which was originally just used for fixing errors, but they are using it for some small conflation tasks. There's also another project called OSMLY; I think that's how it's called, but I've never heard it pronounced. And that project is about adding parks in Los Angeles and conflating them with what's already in OpenStreetMap. Both these projects work really well, and they've really shown that people are able to take these small tasks of just one or two bits of conflation, work on it slowly, and get the whole thing into the map. So I just want to go over what our mission was again. We really want to build and deploy these beautiful maps. And we have these tools like the Builder and our NPMap.js library to create them. And then we serve the beautiful maps through our Park Tiles. And we use Places as just our data source. And we use a lot of open source tools in the back end to really get this to come together. We use a lot of stuff from Mapbox. We're using Mapbox Studio. We're using OpenStreetMap's whole API, which we mostly use in Node.js now. And we have a PostGIS database, on Postgres 9.4 now I think, with the JSON support. And it's really pretty great for what it's doing. So that's pretty much all I wanted to talk to you guys about. I don't know if you have any questions or anything, but we've got a lot of cool stuff. I'm glad you guys listened. Thanks. Thank you. Which of those tools that you mentioned might be applicable to non-Park Service uses? So I guess our internal tools, the Builder and NPMap.js, are really built to create maps in the National Park Service style. But what is pretty applicable to other people, or at least groups outside of that, is just how we are using OpenStreetMap as an interface. And I really think that OpenStreetMap has a greater ability to be used than just in the one project that it's on, and that the tagging scheme and the tools that are built around it are pretty interesting. They can do a whole lot, and someone spent a lot of time developing these tools, and they're still very well supported. And also, we use Mapbox Studio, which does a lot of just data rendering and that. That's pretty cool too. I'm curious how much did you tweak or modify iD? And was that mostly through kind of config options, or was there a lot of code change? There was not a lot of code change at all. It's really easy to reconfigure iD. The main thing that we had to do is we had to get rid of anything but points. In our Buildings Editor, you have the option of adding polygons. And we still haven't gotten to adding trails, which would be through the Lines Editor. We had to make a few small changes to make sure that we add a tag in there that is a link back to our other information. We wanted to be able to disable fields so people couldn't go in there using iD and change these things. You can still go in using something like JOSM and do that. But we didn't want to give all the users the ability to change the ID field. That's kind of the identifier field.
That was just something that we pulled out of there. And I guess the other big thing we had to change was how the login process works, because it goes to our server, then our server goes out to OpenStreetMap, then it brings something back to iD to say, this is all logged in and everything's good. Is your fork of it on your GitHub site? The fork is public on our GitHub site and we try to keep it as up to date as possible, but they do a lot of edits on iD, so I might be about three or four days behind now. Yep. So two kind of related questions. One is, is this service open for anybody to edit or is it only staff? And the other one is, is the data itself available? All right. We really want this to be open to the public, but as of right now, we don't have a full validation phase going through. So we can use this data on our maps, but we can't validate external edits. We'd really like to get the public involved, but right now it's still an internal project. As far as the availability of the data, there's no reason why it shouldn't be publicly available. We just use it on our maps right now. I mean, if you really need the information behind it, it's probably not very good right now, so we kind of don't want to release it. We're still trying to get internal people to update it. But I think as soon as we have something where we can say this is pretty good, then we can start releasing it. But we're still working with a lot of the departments, a lot of the different parks, and getting better contributions in each park. And I just don't think it's the right time to release it until we have a little bit better idea of what's in there. What reaction have you seen from the crusty old rangers who have been at their park for their entire career and say, I know this map is wrong, because I've never gotten that wrong? So the rangers are pretty interested in sharing what they know. And if there's something that they see wrong on the map, they're usually pretty interested in updating it. But there are some kind of problems with talking to these rangers, where there's a lot of stuff in OpenStreetMap now, like trails and things that go through protected areas or closed areas, or the roads that lead to an area that is protected. And these rangers do not want that stuff to show up on the map. They don't want people to even know there's a road there, and they have been known to go into OpenStreetMap and remove things. And that's kind of a hard point for us, because on the one hand, we want to draw these roads there. They do exist. They are something that you can use to help locate you, especially if you're walking down the road and you're looking for someplace to meet somebody or something. It'd be great if you knew there was a road there. But at the same time, if the road leads somewhere that you're not allowed to go, we don't really want people to go into the map ahead of time and say, oh, there's this road that goes right back to this place. And while I don't really feel we should restrict information, I think that since it's kind of a little bit more of an official map, maybe it shouldn't be on the official map, but we should still keep track of it just so we know it's there and we can render it in ways that show that you can't go here, but it does exist. Well, so I've looked at this from a number of ways. USGS also has a fork of OpenStreetMap that they use for their GNIS collection.
And that information is really good now because they have stewards in each of the areas who keep track of it and contribute back to that map. And that information is publicly available, and anybody can just go get it and put it back into OpenStreetMap. When the import was done from GNIS, they kept the information that linked it back, so you know exactly what point was moved and you can really keep track of that. And I wrote a tool that would do just the GNIS data. And I think that it's kind of cool that you can do that, but there really isn't a lot of linkage right now between these other projects. And I personally would like to just have one big project that you could put everything in and everything would be great. But the big problem we have with OpenStreetMap is licensing and liability. If we just use OpenStreetMap as a back end for our maps for everything (and we do show roads from OpenStreetMap), we would have the problem of liability, where someone could draw something on there that doesn't exist. You know, there have been issues where they move a park boundary, or someone has a map that shows a park boundary in a different spot, and there are hunting laws that are different in different areas. And, you know, licensing is a big deal with us because we have to put everything that we make into the public domain. It's just how our system is set up. We can contribute into OpenStreetMap, but once it's in there, it gets tangled with other OpenStreetMap contributions and it becomes licensed through OpenStreetMap. We can take that information back and display it on our maps, but we can never re-release it, because now it has a license and we're not allowed to release licensed data. So it's just a big kind of problem that we're trying to get solved. And there are some good tools out there now, like MapRoulette, really working on conflating these different systems together and trying to create OpenStreetMap as a base map of all this information that's collected elsewhere. And I think that's really the goal, and that it's okay to have some forks of it as long as they know that they are contributing to this main map and that they have slightly different information that fits their needs. Thank you. Cool. Thank you.
|
The National Park Service has many well-known sites, but many parks do not have the GIS resources to maintain their map data. The Places project aims to solve this problem by empowering non-technical park employees and the public with the ability to make changes to the map. The Places project uses custom versions of existing OpenStreetMap tools for data collection and uses them to create an up-to-date base map for National Park web sites. This presentation will discuss how we plan to motivate mappers, how we deal with data validation, and how we plan to continue working with OpenStreetMap.
|
10.5446/31618 (DOI)
|
Hi everyone. Thanks for coming. I've got a lot to share so I'll just go ahead and get started. Okay, why OpenStreetMap? In transportation, a lot of new applications that are coming out on the market require a routable network, unlike before. We have several new transit applications that we've implemented in the last several years, including an open-source multimodal trip planner. And what we found after doing kind of an alternatives analysis is that a typical centerline file doesn't really meet these requirements. They're what you find in most jurisdictions; they're not seamless, they're not designed for routing. Proprietary solutions like Tele Atlas, NAVTEQ, and Google are just not affordable, not for government agencies, especially when you start licensing per vehicle, per bus. And also it's not community-based. We were really interested in working with the community because we span not just three counties here, we span down along the I-5 corridor. But as you can see, this is an intersection as represented in a typical centerline file. It's hard to hang all the attribution that you need for routing purposes. And here's the same intersection in OpenStreetMap. A lot of the turn restrictions are inherent. There are also different kinds of turn restrictions: there are legal turn restrictions, there are physical turn restrictions, and there are turn restrictions for buses that are different than for vehicles. And OpenStreetMap is really designed for this. So again we started this back in 2008. I applied for a grant and was awarded it. We brought in OpenPlans and several other developers, David Emery and Brandon Martin Anderson, and our team at TriMet. Frank Purcell played a huge role in the development of this. And what OpenTripPlanner does is it takes three open data sources: the General Transit Feed Specification (GTFS), which is the schedule and transit data, the National Elevation Dataset, and OpenStreetMap. And it takes all of those and it generates a very intelligent, routable network. And the basis of this really fuels a lot of the applications that we use at TriMet. The multimodal trip planner, where you can plan bike-to-transit trips, unlike a lot of the other trip planning applications. It tends to save time. It's intermodal, truly intermodal. We also have a call taker application. When you call 238-RIDE, we need a lot of information all in one spot in order to provide the customers who are calling in with the information that they're looking for that they can't find or get through normal avenues. This is all powered by OpenTripPlanner; it uses the routing algorithm, it uses OpenStreetMap. Same thing for a field trip scheduling application. I don't think a lot of people realize we work with schools and other agencies that use TriMet to move large groups from one place to another. And we manage that through this application. Also, I'm very proud to show OpenTripPlanner Analyst. It's an analyst extension built on top of OpenTripPlanner that Conveyal built and designed. And it basically is designed to show differences between previous service, current service, new service. It calculates differences in travel times. It also has underlying data sets such as employment information, census information. So you're able to get this at the push of a button. It's pretty cool. I think it's going to change a lot of things. And if you guys would like to see a demo later, I can connect and show you guys. I'll be here, you know, the rest of the conference.
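To give a sense of how an application might talk to OpenTripPlanner once the graph has been built from GTFS, elevation, and OpenStreetMap data, here is a small sketch against OTP's REST planning endpoint. The host, router id, and coordinates are placeholders, and the parameter names follow the OTP 1.x API, so treat this as an illustration rather than TriMet's actual integration.

import requests

# Minimal sketch of a bike-to-transit trip request against an OpenTripPlanner
# instance. The base URL, router id, and coordinates are placeholders.
OTP_PLAN_URL = "http://localhost:8080/otp/routers/default/plan"

params = {
    "fromPlace": "45.5231,-122.6765",   # lat,lon (illustrative)
    "toPlace": "45.4902,-122.5541",
    "mode": "TRANSIT,WALK,BICYCLE",
    "date": "2014-09-10",
    "time": "8:00am",
    "maxWalkDistance": 800,
}

resp = requests.get(OTP_PLAN_URL, params=params)
resp.raise_for_status()

# OTP reports itinerary durations in seconds; print each option's legs.
for itinerary in resp.json()["plan"]["itineraries"]:
    legs = " > ".join(leg["mode"] for leg in itinerary["legs"])
    print(int(itinerary["duration"]) // 60, "min:", legs)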
And then also, Oregon State University is using OpenTripPlanner on the back end to look at transit agencies and travel patterns actually across the state of Oregon. And OpenStreetMap really facilitates that. Again, it's a routable network across jurisdictional boundaries. And they're doing just amazing things with this. OpenStreetMap, we also use it internally. Grant Humphries, who's here in the office, actually generated these maps. But OpenStreetMap is used again for planning and analysis, to generate isochrone maps, routing simulations, things like that. Our computer-aided dispatch system was actually implemented a year ago, and it requires a routable network. By default, it uses a proprietary system. However, that proprietary system was just way too costly. So we worked with the vendor, and they were pretty excited about working with us and actually using OpenStreetMap as the back end to do the routing and the display. Again, show the location of the vehicles, dispatch appropriately, things like that. We also have OpenStreetMap up and running in our LIFT paratransit MDTs, to get from point A to point B, and in our fixed route scheduling system. So basically, OpenStreetMap is our standard base map within TriMet. Again, this is due to a lot of new applications that are coming out that require that seamlessness, that routable nature. So we started improving OpenStreetMap with the community back in 2011. We started with the three-county area and expanded to the four-county area. I hired four PSU students: Mellie Sacks-Burnett, Grant Humphrey, PJ Hauser, and Betsy Breyer. And they did just an amazing job. It went much, much faster than I thought. And what we did was we focused on improving the line work, adding in geometry, aligning it within the right-of-way, and correcting, verifying, and adding attribution. Again, I mentioned turn restrictions, both physical and legal. We added in speed limits and impedances, both for vehicles and pedestrians, and added in a lot of waterways and parks that weren't there. And the method that we used, if you want to find out more, you can just Google search on OTP Final Report. But the method that we used: we used digital orthophotography as a backdrop and then also jurisdictional files as a reference. And it went very, very quickly. It was pretty interesting, the process that we used in order to speed things up. We actually made the jurisdictional data set look like OpenStreetMap's, so you just really copy-pasted a lot of the attributes. We tried to minimize the amount of typing in there. We were very involved with the community at that point online and then had a couple of open houses. Now, continued maintenance in the seven-county area. Ryan Peterson, I just hired him, gosh, within the past month. He's here in the audience. And his main responsibility is the coordination, maintenance, and outreach for OpenStreetMap in the seven-county area, and adding in more attribution. The Intertwine and Metro are in need of trailhead data, so we'll be adding that to OpenStreetMap. But again, Ryan, the picture below him, it's the OpenStreetMap community and Metro and the government agencies, the city and county governments, that are providing their data to us. It's a group effort to keep this maintained. The business justification for this FTE: full-time employees are very, very hard to get. And I basically justified it by doing a cost analysis comparison with the proprietary data sets. That was just a no-brainer.
And it was really appealing to get updates as needed or on a real-time basis. New bridges, new pedestrian bridges go in. The one by OHSU, the day it opened, people were planning, you know, walking trips across that. It also aligns with TriMet's and the City of Portland's open data policies. And just the awesomeness of the OpenStreetMap community: they are really, really enthusiastic and really proud and protective of this data. And also, you know, just looking at it: if we're going to invest $25,000 for a one-year subscription for a proprietary database for just a small area, a one-time fee for just one of the applications, you know, that doesn't work. What we did was we took that $25,000 instead and hired the students, and now we have a product that actually also benefits the public. There are also a lot of various stakeholders that I'll get into, and different ways of using the data outside of just TriMet. And I was also able to pull the money together from internal TriMet departments who were saving money by using OpenStreetMap. They contributed ongoing funds for the position, as did Metro, our regional council of governments, and Salem-Keizer Transit as well. I think that really showed the importance of this position, this work, both internally and externally to the region. And again, regional data, for those of you who aren't local to Portland: our RLIS data set is for the three-county area. It's maintained and/or integrated from various jurisdictions by Metro and then distributed. OpenStreetMap data is maintained by TriMet in collaboration with the OSM community and Metro. We are focusing on the seven-county corridor, the seven-county coverage area. And SpiderOSM is a new tool that actually compares OpenStreetMap with RLIS and other area shapefiles. It reduces duplication of maintenance efforts. You never want to maintain two things, but in this situation we have two products that meet two very, very different needs, and yet we want them both to reflect the same level of accuracy. And this tool actually also allows us to increase the truth of the data and ensure accuracy and consistency. You guys might want to go out and check it out. It's called SpiderOSM. And a shout out to Michael Arnold, the developer of this, from San Francisco. SpiderOSM basically takes any jurisdictional file, any shapefile, runs it through a process, and compares it with the OpenStreetMap data. It's in alpha. There's a QGIS plug-in and an Esri plug-in. And I think we're going to see more and more tools like this. The maintenance project process, this is small intentionally. We're currently working this out with all of the jurisdictions, but again identifying the main sources and the verification process for all of the different attributes that are needed by the community, by TriMet, by jurisdictions. And legal issues. Hallelujah. RLIS, again the data set for the three-county area, just a huge abundance of very, you know, accurate data and information. They just released it under the Open Data Commons ODbL license, same as OpenStreetMap, to again make it easier. And again in the Portland area, the license issues aren't really a problem. Most of the county and city jurisdictions, it's all open, free, and that's very different though from licensing issues for a lot of other agencies nationally. Again I think that there's a benefit to using jurisdictional data.
It benefits both parties: it benefits the community, it benefits government agencies. But the license issue for OpenStreetMap is really prohibitive and restrictive. A lot of agencies, you know, I get calls: what license do we put our data under? We want to work with OpenStreetMap, and you get a variety of different answers. And it's also difficult for government agencies, you know, to hire a legal team and go through this; they really need strong business justifications for that. So I want you all to imagine, again this is a fictional site, but what about a terms of use site? I want to get people thinking: not all government agencies can put the same license as OpenStreetMap on their data. What about having a website where government agencies can go to sign a terms of use agreement, just like a lot of the other websites for, like, the GTFS data and the Google agreements? You have the process laid out there for them, tools, they sign the data sharing agreement, they host it on a collaborative site. And I think that this might really promote, again, the collaboration between public and private. I don't know if this can happen; I'm hoping I can find some of the OpenStreetMap legal team here and discuss that a little bit more. And why collaborate? There are so many reasons, so many reasons, and again the biggest benefit I think is to the public, because they're getting better data from both the private and the government sector. Also, you know, them contributing to it, the cost savings, I could go on and on. There are also, you know, tons of mobile applications out there that are using OpenStreetMap, and a better OpenStreetMap benefits all the users that are using it and all the developers that are developing off of it. Even in Portland we've got Nimbler, RideScout, there's Riot Amigos, there's just dozens of applications, mobile applications that are coming out that are using OpenStreetMap and OpenTripPlanner as a back-end routing engine. So is this all pie in the sky? I don't think so. I think that we can all work together, government and private. At TriMet we've had a lot of success with that, working with private entities: GlobeSherpa, Conveyal, OpenGeo, Google. And it's just a matter of being open to it and really understanding the justifications. I think really identifying and working out the legal issues would help, and better education. So I'd like to thank everyone, especially the OpenStreetMap community; TriMet, Metro, and the Intertwine for funding the OpenStreetMap improvements; local jurisdictions for their data and their support; and then also Sozi and Inkscape. This presentation was not done in Prezi. It was done in Inkscape, and Sozi is an open source presentation tool that I think beats Prezi. So just a shout out to them. Okay, any questions? Yes? You mentioned the licensing issues. Can you make me more familiar with that? I'm curious how does that interact with the municipal government? The license agreement for OpenStreetMap is somewhat restrictive, and it requires that any contributions you give to it and take back have to be shared and be open, and a lot of government agencies aren't there yet. Yes? How would you say the scale of the city of Portland affects the way you can engage with an open data community? Is it easier for a larger city? Is it harder for a larger city? Are there any factors like that you'd like to talk about? I think it varies in Portland.
The OpenStreetMap community is really strong, and I mean it's pretty easy to go in and see the edits, who's editing, you know, the types of edits, things like that. There are a lot of mapping parties that got started. But even when we went into the Salem area, for Salem-Keizer Transit, and were using OpenTripPlanner and really looking at the data, we were so surprised at how good the data was there, and I think it just depends on the community of users, where they're at. Urban areas tend to be, you know... sorry, rural areas of course tend to be less accurate than, you know, the urban areas, but it really depends. It's pretty easy to go in and see the number of edits. How did you maintain and update your data set with OpenStreetMap data until now? Because I read that SpiderOSM is quite a new tool. Yes. And before, how did you do it? It was more of a manual process that we were doing to compare the data and the information. SpiderOSM is really going to speed that up, and we've been working with Michael Arnold. He's actually been using our data to test it and to improve the product. Hi, I'm Eric and I'm with Community Transit. I'm interested in whether or not you have those numbers available for the business case, and just generally what you would say to an agency that's reluctant about any kind of open data in a region where there's a multitude of agencies and some are on board, some are not. And also I'm very interested in the Analyst. A lot of people are very interested in Analyst. I get asked that so many times, and actually, all of this is in the OTP final report. Again, if you just Google search OTP final report, I have a whole section there on the cost justifications. Why open data? Why open source? At TriMet, our IT policy is that whenever we go out and look for software solutions, we actually consciously and intentionally look for open source software and compare them side by side with the proprietary. We don't always choose that. It's really reliant on the requirements, but it's the same thing with data. I think it's just a responsibility. Hi, excuse me, I came in a little late, so if I'm asking something that was already covered, I apologize in advance. I guess if you could just speak to your transition from in-house source data to using OSM: what were some of the challenges, what were some of the lessons you learned in that transition? Well, OpenTripPlanner, again, I think that was the big motivation for looking at it, for adopting it, of course, in an agency. One base map standard is better than many, for a variety of reasons. Moving to it, it was really the users, because with OpenStreetMap, just for instance in our scheduling system, they were using the centerline file and it wasn't really meeting their needs, and so they were creating like their own base map in there and it was hard to get updates. With OpenStreetMap now, they work with us to improve the data and it makes the flow much easier. So I think it was really the users of the systems. The vendors were all for it, and I think the vendors are all for it because they have a lot of issues. If you see a lot of the MDTs or dispatch systems that are being rolled out, it's only as good as what the data is. So I think it was pretty easy to adopt internally. Not much resistance. No other questions? Going once, going twice?
|
OpenStreetMap (OSM) is now TriMet's standard source for routable base map data. TriMet utilizes OSM for internal systems and applications that necessitate a routable base map including, but not limited to: Computer-Aided Dispatch/Automatic Vehicle Location (CAD/AVL) system; Call Center and Field Trip applications; LIFT paratransit mobile data terminals (MDTs); OpenTripPlanner, an open source, multimodal trip planner; and fixed route scheduling system for on-street service. TriMet is now a committed, contributing member of the OSM community. Working with the community and local jurisdictions is a standard business practice supported with a full-time employee (FTE) that is dedicated to OSM maintenance and associated datasets in the seven counties area. This effort sustains the increasing number of systems in the agency that require routable networks, and it supports seamless multi-agency trip planning and analysis in the region. This presentation will include: • Emerging technologies on the market that require a seamless routable network, and why OSM is an obvious solution to fulfill new system requirements • TriMet's OSM Improvement Projects for the seven county regional area in Portland in support of vehicle, walking, biking and transit routing in the four county metro area • The business justification for a dedicated FTE in support of continued maintenance of OSM • The financial support for this position which demonstrates the recognition of OSM's importance from both an agency and regional perspective • Benefits of collaboration between the OSM community and government • Facilitation of progress in this area with open data policies, data portals, and enhanced software tools
|
10.5446/31622 (DOI)
|
So, the topic for this GeoNYC at FOSS4G, and what we usually do is curate a topic, is around diversity. It's a topic that is really close to my heart, and I think if you're here today, it's something that you care about as well. In the past, I've talked about diversity in terms of the metrics of who shows up, who doesn't, who's represented, who isn't in this environment. Today I'm going to try to do it from a different perspective, and I've invited a handful of speakers, which I'll introduce as we go through. And they'll also be talking about angles around diversity in mapping, from the idea of identity, and power, and creating open learning environments. And I'd like to think that in terms of dialogues around inclusion and diversity, that open geo, and the space around open geo, is actually leading many of the initiatives. It may not always look like that, but I think we have a very open, respectful dialogue that's happening in at least the communities that I am a part of, and so I hope that we continue to build momentum towards that. Anyway, I'm going to talk about being awkward, which I think is really key to diverse groups, diverse communities, diverse conversations. So I'm not going to talk about OSM child care tags, and if anybody gets that joke, that's probably a relief. But we're going to talk instead about tension. Yes. Good. I'll just laugh in the audience, because in my point of view, from where I stand when I talk about diversity, I talk about, or I think about, trying to deal with difference and being comfortable with difference, even when that feels awkward, and lots of times it does, and even when that feels weird, and lots of times it does. And weird is important, especially since we're in Portland, and word on the street is that Portland is the epicenter of weird. Now, no comments from the New Yorkers in the group, because that's the order of people from Austin, but Portland right now, here we are, it's an exciting time, we are in weirdness. So just to start it off, to loosen people up, if you haven't had any donuts: who here in the audience is weird? Oh wow. Okay. It's one of those questions where you're not really sure if anybody's going to answer, and like, oh, I'm normal, or everybody's weird. So apparently everybody here is weird. Sometimes at GeoNYC we do like go around, we introduce ourselves, but we're going to do it this way: weird. Can somebody say why they're weird? Aha. So how weird are you? Huh? Nobody? Don't say you like maps, because that's not going to work. Whoa, okay, what? So there's duct tape I heard. Duct tape, art? Okay, always, and I'm sure you always carry duct tape around because you never know. Okay, weird. You in the back. That's you. I think I'm really normal, but y'all hate me weird. Wait, is that a joke? Okay, for the people who are listening, streaming, it was so weird, I had no idea what she said. So we're going to have to get to the bottom of that later. Anybody here weird? Oh, we have somebody weird on our panel. Lizzie, please tell us how you are weird. Lizzie plays handbells. That is pretty weird. And oh no, this gets more serious. Yeah. Not only does she play handbells, but she arranges music for handbells. Okay, so I think it's clear that we have some weird people in the audience. I'm just going to share, because I didn't know who was going to say what. I'm going to share a little bit of what makes me weird. I really, really like fat animals. I kind of want to be a manatee in my next life.
I mean, I think it's the coolest thing. I hear they smell. They're kind of extinct. Motorboats don't like them, but they chill out in the water and they do nothing all day. That's my dream. Sometimes I like to talk into a fan to hear my robot voice. Move on from that. I really can't help it, but I like to color code everything. So I was thinking about this today, and there are some things that I don't color code, and I know there are people who do color code. Like I knew this woman who color coded her underwear to her scrunchie. And I don't go that far. I just want you to know. But I do color code a lot. And this one annoys most of my friends. This is the last one. But I need to walk on the left-hand side. I don't know what it's about. It actually runs in the family. My brother and I would have to walk single file. But I need to walk on the left-hand side. Maybe that's why I don't drive? No, it doesn't work. And that's just always. So as your hands indicated, there's lots of weirdness. We sometimes share it. We sometimes don't. A friend of mine, sent this, Facebook, quote from Dr. Seuss. So I think it captures a lot of what's possible when we really embrace difference, when we really embrace our weirdness. So let me say it out loud. We are all a little weird. And life's a little weird. And when we find someone whose weirdness is compatible with ours, we join up with them and fall in mutual weirdness and call it love. This is when you say, aw, isn't that cute? I want to find my weirdness. Yeah, okay. Huh? Who's found their weirdness here? Okay. That's sweet. But usually life looks a lot more like this, which is pretty boring, pretty normal, pretty bland. Because I don't think we go into social space and be like, hey, that's so weird. I'm looking for weird. I want to find more and more weird. No. We're like, oh my god, you like fiction? I love fiction. You like blue? I love blue. And we call that cultural fit. We call that being on the same level. We call that common ground. We call that even like our shared humanity. This really oftentimes we find like a search for sameness is what's unifying in social spaces and technical spaces. And my premise here today is that if you really want to embrace diversity, that you want to search for tension and you want to use that constructively and gracefully and embrace complexity. And that's the, these are the pathways towards really I think unifying communities. So this is when I start preaching. In my point of view, difference can be as unifying as sameness. But actually here is when I start preaching. So premise for today, apparently I have a calendar invite that I, okay. Balancing the tensions of difference is my pathway to true divergent thought. And I can be different and that means complete whole and a contribution to others. Hopefully a contribution to the people and the spaces that I'm in. This is when I feel like I have found diversity. And this is when I feel that I have created a space that's open to diversity. So I like to make spaces. I feel like they're very creative like endeavors. And when I go about creating spaces, I seek to create spaces for different people. And I really try to look for difference and try to curate that in my approach. And really there's a good deal of selfishness in that. I do that because I'm trying to create a space where I can be different, where I can be myself. So and where I can kind of celebrate my own difference. So from my point of view, we are all different and diversity means embracing difference. 
Embracing difference means being comfortable with tension. And tension is the heart of everything that's magical in the world. And so tension is how we get things like laughter and love and Dr. Seuss and art and maps and music. And so in creating the spaces that we're kind of performing today for GeoNYC, I'm going to use some music metaphors. And hopefully they give some structure, some kind of framework, for creating diverse spaces. Because I think there are many different ways that you can use tension productively to create diversity. So GeoNYC, if you use the music metaphor, is like the classical ensemble. There's structure to it. It reminds me of like my high school band. We sort of play with a motif whose structure allows for freedom. And it goes like this. We eat, we have food, we have talks, we do geo news, we have banter. That's my role. Sometimes it works, sometimes it doesn't. But it's a form that we play with. Maptime, which hopefully you guys will learn a lot more about over the course of the week. We have somebody on the panel, Lizzie Diamond, who's going to talk about it. This to me is like complete organized chaos, in a good way. I mean that with all, with love. And for me, it reminds me more of like a jazz, improvisational ensemble, where actually there's a lot more planning that goes in, but everybody is involved in creating the space. So in both of these kinds of structures, and especially in Maptime, everyone needs to know how to use tension effectively, constructively, powerfully, and openly. So I'm just going to dive into what both of those structures mean and then introduce the group. So GeoNYC, from my point of view, and there are many other organizers involved in GeoNYC. My GeoNYC, my version, is that it's designed to bridge, to challenge, and to really celebrate the various mapping communities of New York City. For people who build technology, it's a way to get our heads out of our code, you could call it code, you could call it body part. But get your head out of your code and think about some of the bigger questions and some of the bigger context that's happening in the world of maps. And for people who might be looking at it from a more abstract point of view, it's really bridging the gaps to meet practitioners and learn how tools are being made and what the major questions are. So when we curate these things, it's really important that we have both of them in the room. Three talks all from technologists is really boring, looks like a blue screen, and is not really going to broaden a community. But having technologists next to thinkers, next to people who are going to challenge the relevance of the technologies that you're building, this tension is really constructive to creating dialogue. So I got involved in February 2013. Since then we've had 17 monthly meetings, we've had 71 different speakers, we're almost at 1200 members. We have six sponsors, four that are particularly active: CartoDB, Esri, Boundless, and Mapzen. All groups that are coming at mapping from different perspectives, but all see the value of having an open, constructive dialogue. And we've been all over the city in 10 venues, including Brooklyn, which I've heard is pretty hip of us. In terms of curating, again, this structure, and you can even, I mean, you really can think of it like music, right?
You have like some major things going on, the trumpets are going wild, you know, and then you have like the subtleties of a rhythm section. And so in GeoNYC we try to get some of the big players that are in New York City, like Google and Facebook and Twitter, et cetera, et cetera, all those ones that you know, juxtaposed with some major thinkers, some activists, some artists, academics, and governments talking about similar topics. So you play with the structure, and people saw me kind of like meandering around the room, which I'm sure like some people out, but I really approach it as building a dinner party, where if you invite somebody into your home and people are eating, everybody matters, and everybody is comfortable, and everybody has like a voice in the conversation. Maptime started last summer at Stamen, and I don't want to put words in their mouth, but from my point of view it's really a response to some of the difficult, open source mapping communities that exist out there, like the impenetrability of understanding like the cultural mores. If I said that right, it's French, right? And they wanted to create a space that was open, that was inclusive, that was designed for beginners. And so from my point of view, it's a place where also everyone matters, and also everyone has something to build, something to teach, and something to share. And so they're much more intimate gatherings. They happen on a more regular basis in New York City, and it's about hands-on learning. And just like an improvisational jazz ensemble where everybody has to be a composer, in Maptime there are no passengers. You can't just sit back and absorb information, from the Maptime New York City point of view. Everyone is driving. And Lizzie is going to talk more about the excitement I think that we're tapping into, that I think people want to drive the future of mapping and touching and creating and being involved with the next generation of tools. And there's a lot going on this week, including a party tonight, to help celebrate that. So I'm closing up, I swear. I want to be clear that these are in no way perfect spaces. These are constantly evolving. They're messy. They're unpredictable. But at their heart, and I think when building spaces that are diverse, intention, motivation, like who you're being, matters a lot. And so these spaces are designed around embracing tension and making it constructive, and feeling comfortable when things don't always feel good. And from my point of view again, this is the difference between having one color of paint and having the tension of creating many colors together. And that's the basis of art. That's the basis of love. That's the basis of laughter. It's how we learn. And it's really what I encourage you to embrace when thinking about diversity. So with that, let's be awkward together. And thank you. I'd like to thank the GeoNYC community, who has inspired me so many times and is the reason I am able to do this kind of curation each month. And thank you to our sponsors, especially Andrew Hill, who does a lot of the news section of GeoNYC and the curation, but also Rolando Panate. I'm going to add that to Spanish. See, I'm bad with the other languages. And he, from Boundless, Nick Furness, from Esri, Anthony De Nero, from Mapzen, they're all the organizers that help create GeoNYC and make it happen. And to Corey Holmes, who I have never met, but is apparently my Flickr soulmate and responsible for all this beautiful imagery.
And with that, I'd like to introduce our speakers, who are all going to riff on the topic of diversity. So first up is Lizzie Diamond. You might recognize her from Izzy Lizard's Diamond Band or something. And anyway, that was what the intro slide said. She's going to talk about Maptime and how you might get involved and what it represents, et cetera, et cetera. So here you go, Lizzie. Take over the mic, and I'll get you your slides together. Hi. You should take this moment, if you haven't yet, to get food and drink and stuff, because it's delicious. And we're in Portland, man. When in Portland, eat things that are terrible for you. Thanks. So do I have to do a thing? Oh, yeah. I don't know. This is where we banter. Oh, right. We're bantering. I can't actually banter as well as Alyssa does. I got a chance to go to GeoNYC a few weeks ago, and there was some discussion of how awkward can you really be when standing, trying to banter at a microphone. And I think, there it is. I don't know. That was like, I think that was like a two or three on the awkward scale from like one to awkward. Yeah. Wait, one. Yeah, you got it. Okay. Yeah. Cool. Done. After you. Thanks. I'm Lizzie. I just spoke at another talk, and so my heart is still beating kind of fast. So I'm a little shaky, but I am a fellow at Code for America. Yeah, that's right. And I help, or I run, Maptime Oakland, and I also help the lovely Beth and Alan manage Maptimes. So I'm going to talk about Maptime for a change of pace. So what is Maptime? Maptime is quite literally a time for learning about maps. It is hands-on and beginner focused. Those are pretty important things about Maptime. And there's an emphasis on open source, programming, and web mapping. Basically, it's like hanging out with all your closest friends and doing something awesome that you all love and care about. So I wrote this in a blog post: Maptime exists because community, inclusivity, and accessibility are important and necessary components of positive learning experiences. I think that this is sort of the key to all of this. It's fun to hang out and learn with your friends. It's easier to learn when there are other people around. But it was so hard to do this before. And now with Maptime, it's a lot easier to leverage the power of your community. So at State of the Map in 2014, Beth and I did a Birds of a Feather session. There's going to be another one that Alan and I are going to do tomorrow as well. But in that room, we brought people together and talked about what makes a good Maptime and what are some challenges around Maptime. Going into that meeting, there were four Maptimes. And now there are 25 Maptime chapters. So in the last six months, we've exploded. There are 25 active chapters that have had a meeting or have a meeting scheduled, plus another 15 that are in incubation, including Johannesburg, South Africa, which is really exciting. And you'll see prominently displayed there in the bottom right corner of that image Maptime Null Island, which is the flagship Maptime chapter. So, at this point, you're like, oh, my God, Maptime is the coolest thing ever. I want Maptime in my community. Why is there no Maptime? I want to start it. I'm ready. And I'm going to tell you how to do that. Oh, whoa, that was a sneak peek. So the first step is to find the community. I think that Maptime is valuable because it touches people on the fringes. It's not the, you know, I have no interest in technology.
It's the, I'm a GIS analyst and I'm curious about web mapping and I don't really know how to get started. Or I'm a developer and I want to make a map, but I don't really understand what a shape file is or what a projection is. Now, these people with just a little bit of encouragement can kind of be brought into the fold and start doing really awesome work and be a great part of this community. So you have to reach out and find them. And, you know, they're typically GIS list serves, tech meetups, that sort of thing. The second step is to kind of hammer out the logistics. Where are we going to meet? When are we going to meet? How are people going to get there? Are we going to have food? And that's where Alan and Beth and I come in. You can email us hello at Maptime.io and we can kind of help you get all that set up and bring you into the Maptime organizer organization. That sounds weird. Make sure that your event is inclusive, interactive and appropriate. If you have a whole bunch of developers in a room, doing an introduction to JavaScript is probably not going to be a great talk. Similarly, if you have a bunch of GIS analysts in a room and you're doing a, you know, what is a map, probably not that appropriate. But you also have to think about things like jargon, assumptions. What do people know? What are people supposed to know? And those kind of things sort of think about like appropriate language and the way to make an inclusive environment. And Maptime has a bunch of tutorials that are good to start with. One of them is called Anatomy of a Web Map. Kind of talking about like what actually makes the Web Map. Another one is OpenStreetMap 101. It's a great one to start with because you can kind of do things right away. And then there's also for those who are really excited about Geo, introduction to geographic data formats is my personal favorite. And all of these are on the Maptime GitHub. So you can go and grab them there. These two magic sentences should be included in every meetup. And every Maptime is excellent about this. And we really appreciate that. But please bring a laptop. Beginners very welcome. Reaching out specifically and saying yes, beginners, please come is huge. Really, really important. And it does bring beginners out who might not otherwise come to a meetup. Maptime has many different models. At Maptime NYC, they do more project based. Maptime Oakland, we do more lecture based. And kind of a spectrum in between. So there's lots of different models. And you can make it work for your community regardless of what your community wants. Maptime is an important idea. It's a meetup groups. But it's an important idea. It shouldn't be so hard to learn. It's like really hard to learn. And we have the power to make it easier and teach ourselves in the process. So why wouldn't we do that? Also we have stickers of various varieties. There are these ones. And we also have shiny rainbow ones for badges if you are into Maptime and wanting to talk about Maptime. So come and talk to us. Check out our website and GitHub and email and come to our party tonight, 8 p.m. And thanks. Bye. Does anybody have any questions? Who did you? I'm just observing that this conference is huge and there's few people here at the plenary and this conference also this group is predominantly female and I've been in rooms where I've been in the only woman. And I'm just wondering how, just noting that. Not really a question. Excellent observation. Can I respond? Oh, yeah. 
Well, I like to think that there's a lot, I mean there are eight tracks and there's a lot of really exciting stuff going on in those other tracks. And for people who want to learn more about technology, it makes sense for them to be going around the tracks. I think that what we see here are people that are, and I don't mean to speak for everyone, but the people that we see here are the ones that are kind of leading discussions and really paying attention and being aware of the communities that are making those technologies and want to participate in this conversation. So I see you all as leaders and that's who we want to be talking to right now. That's a great answer. I agree. Plus one. Yes. Is there an official Maptime cheer? Is there? Is there? He asked if there was an official Maptime cheer. That wasn't like a leading thing. I mean we can make one. Not right now. It's too much pressure. Oh, yeah. At the party tonight. Come to the party and we will reveal the official Maptime cheer that Jason's going to make up between now and then. Yeah, come talk to us. And I just like to keep saying that in Maptime NYC we use the terminology you've been Maptimed or I've Maptimed someone. We think it's a verb disguised as a noun. Maybe even an adjective. Yes. Yes. So we have a group up in Seattle that's very outstanding. Go to the mic. We'll also repeat the questions. People are feeling lazy. Sorry. So in Seattle we have a group that's filled with more intermediate to advanced geo users, and we get beginners, but I believe they don't come back. Which is kind of expected if you're not comfortable around a bunch of people who are talking about things where you don't know what's going on. So as someone who's interested in talking about or starting a Maptime Seattle, how would I convince my cohorts in the current group to stay focused on teaching beginner stuff instead of taking the conversation into more intense details? I think that those are moments that require a little bit of leadership on the part of the facilitator. If you notice the conversation perhaps going off in a direction that you didn't intend, and if you were at my last talk then you know what I'm talking about, then you have to try to bring it back and focus it. You can also talk to the people ahead of time and make the intentions of the meetup very clear. This is intended for beginners. These are our goals. I know that Jason has spearheaded an effort for a Maptime code of conduct. Those are things that could potentially be addressed through such a mission, vision, and conduct code kind of deal. You can just go up to the mic if you have a question. Okay. I think this is really great and it was cool to see the map of places where you've done this. You said the goal is to reach the fringes. So I notice the map only covers right now like the US, Canada, Europe. What are some of the visions you have for the future of expanding into other continents or areas that may not even be very represented at this conference, and maybe translating some of the intro materials that you've put on the GitHub site? Well that's interesting. So Alan and Beth and I don't start the chapters. It's people in those communities who start the chapters. So it's a matter of reaching out to those communities. Just yesterday a gentleman from Spain reached out and said that there is a group in Spain and in South America called Geoinquietos, which is basically very similar to Maptime. And so we're going to talk to them about building a partnership there.
I think it's just, it's been very organic up to this point. And at conferences like this we have the opportunity to kind of get the word out. But if we were to reach out and start groups in other communities where there isn't a local champion to kind of make it happen, then it would just fall flat. So I think it's a matter of communicating broadly and bringing more people into the fold as it were. But it's a good call. And as far as translation, there's a gentleman in France who is starting a chapter in the Alps and he is interested in starting to do some localization and translation stuff. So yeah. I'm going to leave now. Okay. Improv. All about the improv. Yeah. Was that awkward? That wasn't awkward. I'd like to introduce now Liz Lyon. She has big dreams of touring with a Geobus that hopefully she'll talk about later. She's from DC, which is weird, and she's going to talk about geography and kind of like what it means, the identity of what it means to be a geographer. So take it away, Liz. So these are my colleagues. But more importantly, part of don't drink my coffee either, please. Just drink yours. Okay, that one's mine. So double-fisting is important. I think part of this has actually started to gel from a lot of conversations that Alyssa and I used to have while drinking a lot of wine and trying to reach to illusions of self-actual arity. Is that a word? I don't speak English very well sometimes. And really thinking a lot about identity. And what does it mean to be a geographer? So I've been doing this geography thing for a little while. I actually became a geographer when my family moved us overseas in the seventh grade. And it was great because we had geography classes. And so if you are a U.S. student of public education, you didn't get a geography class. And I think that's a really big disservice. And I didn't really understand that. And then I went to college and I didn't become a geographer. I became an economist. So that's why I went to grad school because I wasn't qualified to do anything. So I decided to become a geographer. So through this journey of trying to figure out, well, what's the identity of a geographer and advocating to be geographers or for the science, for this discipline of mine that I love and want to be in for the rest of my life, hopefully. That is a long time. I think a lot about what's the identity of a geographer. And somebody said something really important to me a couple months ago. And they reminded me that the concept of geography is a 19th century concept. It's an age of exploration. It's these old guys. They were educated. They were connected. They sat right next door to decision makers, to kings, to queens, to generals, to people who were founding nations and going off into the wide unknown. And they dedicated their entire careers and their lives. A lot of them weren't married. They didn't have families. They didn't have a lot of distractions. All they did was make maps. That was the geographer of the old. Also, if any of you are in the academy or if you're in grad school right now and you Google old geographer, in your first hundred hits, it's really kind of entertaining because you do end up getting some hits from your current professors. And that's kind of weird when your current old geographer professor shows up right next to the Vermeer painting. I'm just saying, I'm not going to say who they are. Kansas State University. Anyway, moving along. 
Okay, so if you are of a current generation and where we kind of think about what does it mean, and so now we're moving into this whole mapping thing. When I got my degree or when I was going to grad school, the first time, and I said, I'm going to go get a degree in geography, the first thing people said was, isn't everything mapped? I was so tired of that question. But the answer is, well, I guess so. But then the answer is, well, no, not really. And I mean, there's a lot more that can still be mapped, but we're deriving that. In terms of Rand McNally Atlas, yeah, everything's mapped in the Rand McNally Atlas. And I have like every single edition since 1981, and it makes me so happy. People actually give me their archives of the Rand McNally Atlas. We grew up with Oregon Trail if you were a student of the 80s in the United States. And you remember going to the library and checking out the Apple Macintosh computer for 30 minutes, I fought with my brother about this one a lot. And we're in the world of Carmen Sandiego. These kind of built into the identity of a lot of folks who are here in terms of the gamification of geography in terms of some of the new mapping applications. They all were a foundation. Oh, and Goonies like treasure maps. I mean, seriously, coolest thing on the entire planet. So where does that all of that form? And what does that all become for me? Well, what's my identity? I'm so glad that GIFs come through because I wasn't entirely certain that would happen. But it became a combination. And I also realized about 25 minutes ago when I was not looking at my slides ahead of time. But unintentionally, these actually are all with the one exception of the one that I created of myself, my Bitstrip avatar. But they're all of women. And that was actually kind of cool for my realization because I'm like, oh, these are some of the things that build into my identity of being a geographer. I throw up Liz Lemon in there because that's actually something that I have not embraced. So my name's Liz Lyon. And the number of times I get called Liz Lemon is a lot. So I'm slowly learning to embrace that also as somebody who's watched 30 Rock like maybe twice in her life. But I really love Tina Fey. And I love the characterizations that Tina Fey has created and the model and the role model that she's become. So it's a journey. And I think one of the best things when we start thinking about identity is embracing what we know and the pieces of the journey along the way. I threw up something on the corner, the classifications and qualifications. So Alyssa mentioned it earlier in my day job or in my life. I live in D.C. and I'm actually classified. Somebody said, in real life. So we have to throw in an acronym since I work for the government. I actually am a geographer. Geography series 0150. It hasn't been updated since like 1958. It's kind of awesome in that way. And I'm working on changing that because it does need to become relevant and it needs to move to the 21st century. So what does geography look like today? And geographers look like today. The best thing that's happening in the community that I think is so phenomenally exciting is geographers, the identity of people who are making maps is in the community. It is local. It is kids. It is projects all over the world from working in universities, working in slums, working in developing spaces, working across industries. And then there's also a bridge. There's a gap. There's a divide. And that bridge is a little bit of a question mark. 
There's things that are happening, spaces, communities that are starting to bridge in between what is also happening in the corporate. Google is doing a lot of mapping work. Esri, certainly. Digital globe. Lots of companies have these big business models that are focused on mapping. So we have local and we have corporate. So how do you start reconciling the two between the old, the new, the corporate, the local? Well I don't really know. And so I just came back from hanging out in this place. And it was probably one of the most inspiring things because I was in Saudi Arabia and working as a woman in Saudi Arabia is a whole other conversation. I will not be driving the Geobus in Saudi Arabia this year, but maybe someday in the future. But what was great about being there and interacting and seeing the, this is called the edge of the world. You're looking out from miles upon miles and there is nothing. And then you think about the differences in the opportunities in a place that is very different from perhaps what I saw and grown up with in the US. There is some phenomenal potential. And there's really an opportunity to jump off of the edge of the world for all intents and purposes. By the way, don't try jumping off the edge of the world. I don't recommend that. Because I don't think there's any emergency flight find you thing. You're pretty much just dead if you do that in Saudi Arabia. So I did not do that. Clearly I'm alive. So anyway, all that is to say to sum up is what is the future? Well, okay. So Alyssa just asked the metaphysical question, am I really, really here? I don't know because somebody gave me donuts and I should go run a lap. So with that, I'm going to leave those parting thoughts. If you're ever in DC, I help organize the GeoDC meetup. And we love to, we are a very techie oriented meetup group. And so we like to explore other topics as well in addition to some of the technology talks. We're always in the same place. We like habit and free space. And free space in DC is hard to find. So you find us the first of the month, first of Wednesday of the month at Stetson's in DC. So that's it. Questions? Please, no. Well, you can ask questions. I really will run around. That's a nice guy's popping. Other questions? We're going to try some tech changes now, Josh, you. Yeah, do you want to? Wait, I'm checking my email. This is when you do Hokey. Yes, as Alyssa said, recovering. Things are going well for me. Recovering Hokey. Hokey addict? Yes. Portland is a great place for it. Let me get a sense of, I haven't been part of the conference yet, who here is a developer? Developer, okay. And what kinds of things do the non-developers do? What kind of things do the non-developers in the room do? Are you managers or policy people or you map? Okay, what do you map? So you make maps for something? Okay, cool. Okay, I think we're ready to go. I know I have no idea how to do this. So I'm going to add two calls to action. I'm going to give you my story in maps and what I've noticed in my short time there. I can look at the slides. Okay, so my name is Josh Lifton. I grew up in Idaho. Thank you. Not too far away. The blue hoeing or growing up in Idaho? Yeah, they both happen a lot. And when I was a kid, it's not me, it's just some random internet kid, I, in order to, you know, stay off insanity, I became a magician and I spent an entire summer, I landed a gig by doing one magic trick for like the local cobble of Illuminati that controlled the county fair. 
And so I was allowed to go to the county fair and perform my magic tricks. I had to learn magic tricks by the time the county fair happened, which was in the fall. So I spent the entire summer reading books and learning magic tricks. And really what that meant was I spent the entire summer in front of my bedroom mirror, because that's what magic is all about. It's all about visual trickery. And doing a magic show in front of a blind audience would be very difficult because they're just, they're not going to be tricked by your things. It's almost always visual, the tricks. And this is, it was very useful for me in many ways. And, you know, fast forward many years and I find myself working with a small startup called Stormpulse, which is a mapping tool for tracking risks and hazards, right? They just take public data, public weather data and risk data from NOAA and other resources, and they just put it together in this nice interface. These are all the devices, I just grabbed this off of their current webpage. But they're advertising, you can look at the entire world, see all the weather risk to your particular assets, whether it's on the coast or whether it's a train line or a warehouse or a retail center, and get alerts for them and whatnot. So this, you know, started out as the founder, I came on as kind of a late third founder, but one of the original founders, he was doing it because he lived in Florida, he wanted to track hurricanes. And so he built this little application in ActionScript 2 to map out hurricanes and get all the data in one spot for all the weather he could about this hurricane. And then people started using it and he got a lot of interest in it. A lot of people were the weather wonks, people who just wanted to know about hurricanes. But then he noticed there's actually quite a bit of .mil traffic going to his website. So a lot of like .gov traffic going to his website. And you fast forward a couple of years and this is Barack Obama looking at Stormpulse. And I got to learn a couple of insights into who exactly was using this. You know, when they were struggling with their subscription model and they had like a $50 a year subscription and they changed that and they kind of left a bunch of users along the wayside and they were trying to focus on corporate clients. And they get a phone call one day and it's somebody, it's like, hey, this is Bob at the White House Situation Room. I'd like to renew my account, please. And that just kept happening over and over again. Like they'd see it on CNN. Like there would be their piece of software with the world map on CNN as one of the background screens. And they learned pretty quickly that the primary user, like the power user, what they really wanted was a massive screen, right, the biggest screen in the room, in some dark underground room and lots of terminals with people like typing diligently at them. But they wanted one huge screen that was bigger than all the others that had the map with all the information on it, right? And that's what they were using Stormpulse for. So this, you know, this has made me realize that if you combined, you know, Joseph Campbell's power of myth with Edward Tufte's visualization of information, you get maps, right? And really a map is an illusion of power, right? And it's what people hide behind when they're lost, right? What people want to show everyone else when there's something bad happening. And they may not even use the map, right? 
Like I don't know how many people were using it. We never heard stories about people using Stormpulse. It was always about Stormpulse being there on the screen and giving that comfort and that sense of power, you know, and that everything was safe. And I think that's really true, right? Because when you, a map is in some ways the first, like, visual augmented reality, right? The first real data visualization, I bet, was a map. And our brains are just so heavily visual, right, that we can't help but feel safe and comforted by these things. So that kind of brings up the question of, well, for you all out there who are making maps and who are coding and make better maps and whatnot, how much are we just tricking ourselves and how much of this is truly useful, right? So that's one call to action. Just keep that in mind. Like, is this really truly useful, or does it just look good to the deciders who want it on the wall? And then the other call to action I have is, you know, there's the illusion of power behind maps. There's the illusion that maps convey of power. But then there's the actual power, right? And behind every joke is something real, and so every illusion is somewhat real as well, right? So who's using your maps and your software? This is the, I assume, FOSS, the free and open source software, right, for geo. Is there, I think there's a really vibrant open source community, but then how is that being used in, let's say, by whatever metric you want, like the number of servers running it or the number of page views or whatnot. So I think tracking that is also really interesting and finding out where the nexuses of power are, kind of mapping the mappers, I guess. That's about it. Thanks a lot. APPLAUSE Are there any questions? For those keeping track, we're going to go past four, but we have one more speaker. First of all, thank you, Josh. I'm sorry. Thank you. We have one more speaker, and then I encourage you to, like, stay around or go to the next talk. The next person is coming from Public Lab. Mathew Lippincott, is that right? He is relieving me of my technical duties and will be performing this with paper. You might be familiar with it. It's where maps were originally designed. No, I mean, after the cave drawings, but before the web, paper. Later, later we'll be playing paper-rock-scissors; that might be involved in the Maptime cheer, still unclear. But Mathew, please take it away. Should I unplug? Sure. It's going crazy here. Has this happened before? I don't know. Okay, here we go. Yes, I printed some handouts for all of you. I know one person in the audience has seen this talk, a bit of this talk before, but I'm going to maybe, Jacob, you help me do some handouts. So with paper, if you take one, then you pass it. I also have paper maps later. If anyone wants to check them out, you unfold. Right, so in my day job, I help train a lot of people and a lot of different people to make orthophoto mosaics using kites, balloons, poles, really low-tech hardware and some high-tech software. That's not what I'm going to talk about today. I'm going to talk about talking and how to actually get to hear what people of other backgrounds have to say, because I think that that's not something that a lot of tech people, myself included, can often be very good at. Right, so this is a talk about not always doing the right thing in diverse communities. I talk too much. I give talks. I'm asked to give talks because sometimes I give good versions of talks. 
This is going to be a good version of this talk because I practice and I have notes and I wrote everything down. So even if I screw up, it'll probably be a pretty good version of this talk. But sometimes I give people bad versions of talks, like I'm just mouthing off and people don't tell me to shut up when I don't know what the heck I'm talking about. And that's partially because I speak with all of the affectations of intellectual authority that we assume people will have when speaking in the United States. It's accumulated through my upbringing and my education. And I'm able to project my voice into space and space is made for my voice, because I have the full force of formal and informal institutions and relationships that form a web of, a protective web of prejudice towards me in this country, because I'm a man and a WASP in America. So racism, sexism, classism, and heteronormativity are all forces that help put me on a stage like this. And let me talk to you and also talk over people in conversations. And I do. I talk a lot. It can be bad. Sometimes I'm just talking to someone else and I'm giving them some like bad version of this or some other talk of mine. They'll just like cede space to me. And that's helped me become a better public speaker because I was raised and encouraged to take up space. And because other people are obligated to give me space and because my culture is dominant and the ingrained prejudices of it affect the perception and reception of the things I say. And this is a component of why you're listening to me. Like I'm actually performing privilege right now and we are reproducing structures of power in America right now. And so that's privilege. I didn't earn it. It's not something I do. It does shape who I am. And I can't apologize really for it because it's not my fault. But I can apologize for my actions. So I mean I talk too much. I'm sorry. I'm working on it. And I hope to do less talking in the future. And I gave you actually my guide for myself to talking less in the future. It's a system for making sure you take up a fair and equal amount of space with other people and provides quantitative and qualitative rubrics for assessing whether you're succeeding at sharing space. So the first issue would be like share time. If you walk into a room of people, divide the number of minutes you're likely to hang out by the number of people in the room. And that's like, that's my talk time. And trying to say one thing at a time really helps there. Take three seconds to think after someone finishes talking. And for me that's because I need to actually catch up to the end of someone's sentence and listen to the whole thing they said before I speak. And I'd also turn it around to myself and give my thoughts three-second spaces so someone could interrupt me between my singular thoughts. And then a big one which would be finding empathy, which is trying to understand what someone feels. And not necessarily trying to reconstruct what it might take to make me feel the same way. Just trying to feel the intensity behind someone's words and trust that feeling. And then realizing that understanding isn't necessary. So no one owes me an explanation for their life. It's okay if I don't get things. I don't have to understand everything. And those simple guidelines you could abbreviate STFU, which I'll do in a little second. It could help you remember the entire system. And this can be really helpful if you're like trying to understand what your users think. 
Or not talk over people in a room, or have an equal conversation. And so I'm especially sorry for all of the things that I could have learned if I had shut up more, especially in the past. And you know it's really nice when people call me on my bullshit. I will tell people you can tell me to shut up. But that's not something anyone's obligated to do. And you may wonder why I actually have done this as a form of an apology. It's because I was trying to structure and explain the form of a proper apology for when I do actually screw up and have to apologize. Which is that there are actually four components to apologizing for screwing up. Which is: this is what I did. This is why I'm sorry about it. This is what I'll do in the future. And then this is the effort I'm willing to put into change. So those two systems, like sharing space and then apologizing when I screw up, are really central things to actually figuring out what other people have to think. Which is important when I'm trying to build technical and policy systems that affect other people's lives. And I think I just did that talk in two and a half minutes, which is slightly more than the amount of time I would be given to speak if all of us were to talk equally. But that was my talk. Thanks. Does anybody have any questions about listening? Was that awkward? That's right. You're thinking three seconds before responding. Which could take a couple minutes. So I'll just sit here. Are there any questions? Any feedback for the speakers or for the structure? Yes, Kate. Please come to the mic. I've discussed this with two fifths of the panel, or at least approached them. Not the two fifths I've lived with. Ooh, sexy. Well, there's a Venn diagram of people that are both. And I think, I mean, I know you were trying to recreate GeoNYC. We're at an international conference that happens to be in the United States. And I think everyone here is an American. Yes. Or to be more inclusive, from the United States. I mean, we're all from the Americas. So if you were to do it again, would you maybe approach it? Well, pardon me. Yeah, this is a really good question. Part of the curation was actually to find people that were local. And local that were coming outside of the FOSS4G community. So I guess two fifths of the panel are not within the FOSS4G community directly. I haven't lived with either one of them. But I think that's a really good point, is that I wasn't really taking into consideration like the international perspective when trying to curate the topic. And I think this is something that I think challenges like FOSS4G in general, like the open source community, is to get more of those voices included. So we should plan this again for next year and work together. Yeah, I think it would be great. Gender diversity and technology is very important to me as a woman, but I think also as someone who teaches FOSS tools all over the world, it's important to think about all the aspects of diversity and how we even approach that. I thought the comment was interesting about what would magic tricks be for the visually impaired. And I'd be interested to know what exactly that is. You could do a magic trick that's not visual. And what's a map mean that's not visual? I mean, there's a lot of work towards that. Thanks. Any other questions? Yes. I'll get up here. Hi, I'm Dana. I've got a couple embarrassing things to say. That was a great topic about the gender diversity. I even made a quasi-joke with my friends that I fit in well. 
I went to another conference without a badge and snuck in. I said, look, I'm a nerdy white guy. I could fit in here. And all I saw was nerdy white guys at my table. And so I think both the issue of some of the STEM work that's done, and I think this mapping stuff that you're doing with these communities, is a good way to bring mapping to a more diverse group, especially with the younger folks in our schools and whatnot. So thanks for the diverse talk. And so the second embarrassing thing is I wear pink underwear. So I guess that makes me weird. Not women's underwear, just pink underwear. Thanks for letting me express my weirdness and thanks for the good work. Okay, so let me understand this correctly. You wear pink underwear. It's a little echoey over here. Oh, sorry. Yes, you were asking earlier what makes us weird and I raised my hand and you said why? I wear pink underwear. That was an excellent, excellent way to close out this session. So thank you all. Thank you. Somebody tweet that, please. And yes, I encourage you to carry on these conversations here within like the context of FOSS4G and the communities that you're a part of, and in the cities that you come from, and let's continue to be leaders in this space. So thank you very much. Thank you organizers for inviting me to speak. And I hope to see you again soon and help finish the donuts. Thank you.
|
Diverse communities provide the space for different points of view to find voice. Historically open source communities have balanced the contribution of various perspectives and expertises. We are often industry examples of remote cultural collaboration. But the nature of collaboration is changing, where diversity must stretch further across geographies to foster a wider scope of difference. One that includes the other sides of privileged space. In this session, I will present on why ideological diversity can be at the forefront of community structures by introducing three personal cornerstones - Mapzen, Maptime, and GeoNYC. This interactive session highlights how embracing a range of cultural perspectives and technical expertise allows communities to create the unexpected. We'll review success and challenges while performing our own mini GeoNYC complete with 3-word introductions and mapping fun.
|
10.5446/31624 (DOI)
|
Now we are ready to begin with Fletcher Foti. Foti. Foti. Either way. UrbanSim 2: simulating the connected metropolis. OK, so I sort of, when I found out what session I was in, I sort of reframed the conversation a little bit. And specifically, I want to go off script a little bit to frame what we do in the context of what's already happened in the session. So it doesn't seem like it comes out of left field. Because in fact, we do exactly the same thing that the previous session was talking about. At the core of what we do, we have statistical models where we're trying to predict prices or where people live. And we use variables that are very similar to the statistics that GeoTrellis Transit was talking about. So, something like population reachable within 15 minutes, 20 minutes, 30 minutes, average income, all the sorts of statistics that you might normally get out of GIS. But using distances along the network, rather than within shapes. And I just think it's interesting there are actually people behind these open source projects. There's a history here. So OpenTripPlanner came from Brandon Martin-Anderson. And it's written in Java. And that's been going on for three or four years. There's actually another one of these called Open Source Routing Machine, which came out of a university in Germany. And Dennis Luxen is the person who started that project. And he now works for Mapbox. And we've actually been working with Dennis for three years, back before he worked for Mapbox when he was just a grad student. And have been integrating our code with Open Source Routing Machine for years. And have finally released it a couple of weeks ago. And it's now fully accessible in Python, even though Open Source Routing Machine is written in C++. So we'll come back to that. But I just wanted to frame the discussion here. So who am I? I am a PhD in city planning. I am a newly minted PhD. I got my diploma in the mail on Thursday. Yeah, thanks. All right. Yeah, very exciting. I'm glad to be done. Unfortunately, this is being recorded. So my advisor is probably watching. My research was relating Walk Score to home values in San Francisco. Spoiler alert, since I know you're not going to read my dissertation: I found a relationship. People are increasingly preferring walkability in their homes. I am now a chief data scientist at a company called Synthicity out of Berkeley. And I actually am a Portland expat. I used to own a house a block from here. So I'm happy to be back on home turf. Outline of my talk: I'll tell you what UrbanSim is. Give you a case study of an application of UrbanSim. Discuss our open source stack. UrbanSim is open source. It is BSD licensed. We are our open source stack. I live and breathe our stack. And I will get to the point, which is we have pivoted some technologies, I hope, out of UrbanSim in a way that is accessible to other sort of PyData programmers, some general functionality that you might find interesting. And I'll talk about that. I'd also like to sort of coin this term, Urban Data Science. There's not really a term for this right now. But there's a lot of stuff going on around data science that is specific to behavior in cities. And I would like to create a community around that sort of thing. And I don't expect that I'll actually have time. But if we have time, there's a demo at the end. But it's all in GitHub, the examples, demos, that sort of thing. Pinterest is not just for food and clothing anymore. I keep all of my favorite resources on my Pinterest site. 
It's very visual. It's very well organized. You can follow along. Everything on that site is, I will answer questions on any of that. Feel free to ask on something I haven't talked about. So what is UrbanSim? UrbanSim can be thought of as retirement planning for cities. So 30 years from now, most cities, perhaps Detroit and Cleveland excepted, expect to grow by vast amounts over the next 30 years. And are we going to be able to house all those people, all those jobs? What are the impacts going to be on transit, on traffic? How can we, are cities going to function when they're twice as large as they are now? And how do we keep them functioning at as high a level as they are now, or improve their functioning? So UrbanSim is a set of statistical models to forecast changes in population, employment, and the built environment over 30-year periods. More specifically, it is an agent-based simulation of regional real estate markets. So what that means is we capture individual households, individual jobs, and the decisions that individual households make, the decisions that employers make, the decisions that landowners make. And each of those things are disaggregate objects that are allowed to make their own decisions. And the behavior sort of emerges out of that. Clients are primarily regional governments right now, or very large cities, who are planning for the future of their areas. It's been around for 15 years. It was originally written in Java in 2000 and was re-implemented in Python in 2006. And we recently re-implemented it this year on new scientific Python tools, specifically Pandas. When I first started using Pandas, does anyone know Pandas? So not everyone. So Pandas is the data analysis library in Python. You can now do close to, well, I'll say, everything I need to do, or would have needed to do in R, I can now do in Python with Pandas. So all sorts of statistical analysis is now available in Pandas. I can do database-like things, group-bys, aggregations, filters, evaluations. All that sort of stuff is now available to us in Python. And it's gaining in popularity all the time. Now it's up to 120,000 downloads a month. When I first started using it, it was 10,000 downloads a month or something like that. And UrbanSim was created by Paul Waddell, who's my advisor. He's chair of the city planning department at Berkeley. This is his slide. From a theoretical perspective, cities are really complicated. And UrbanSim has the goal of modeling all of those behaviors. So there are long-term choices on where to live and locate, short-term choices on how you're going to spend your days. There are developers who build things. I'm going to trust you guys. And I'm going to talk more about how it's actually implemented than what it is from a theoretical perspective. I think you can do it. Let's try this. So essentially, UrbanSim is a set of statistical models that run one after the other. At the core of it is these network-based variable computations that are similar to what they were discussing in the last session. And then we use those as variables to understand how people behave. There's a residential side. There's a commercial side. On the residential side, you might be very familiar with it because you might have bought a house and you made trade-offs on what you were valuing in buying a house, this amount of money for another bathroom, for a larger lot, for the good schools, that sort of thing. All those are trade-offs that you can capture in a statistical model of home values. 
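To make the Pandas point concrete, here is a minimal sketch of the database-like operations mentioned (filters, group-bys, aggregations). The parcels table and its column names are invented for illustration and are not UrbanSim's actual schema.

```python
# A minimal sketch of the pandas operations described above, on a small,
# made-up parcels table (all column names are hypothetical).
import pandas as pd

parcels = pd.DataFrame({
    "zone_id":    [1, 1, 2, 2, 2],
    "land_use":   ["res", "com", "res", "res", "com"],
    "units":      [10, 0, 4, 8, 0],
    "price_sqft": [310.0, 275.0, 420.0, 390.0, 510.0],
})

# Filter, then group and aggregate: the database-like workflow that previously
# required hand-rolled code in the older UrbanSim codebase.
residential = parcels[parcels.land_use == "res"]
summary = residential.groupby("zone_id").agg(
    total_units=("units", "sum"),
    mean_price=("price_sqft", "mean"),
)
print(summary)
```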
That would be the residential price model. The location choice model is where you chose to live and what you're trading off in where you're choosing to live. The transition model is increases and changes in demographics of people in different cities. So your population might double and the population might get younger. People might get married later. All sorts of interesting demographic things are happening now and are happening all the time. There's another set of models that are on the commercial side. I won't go into that too much. And then there are real estate developers who make choices about when to build new buildings that are largely informed by the decisions that we all make. So when we drive up prices, developers see that and they start building more things. UrbanSim is a scenario planning tool. What that means is I have this future with this set of policies. I have this future with this other set of policies. I'm going to create some outcomes. Which one do I like better? Inputs are things like fees and subsidies. In Portland, we have an urban growth boundary. There are incentives to live in more dense locations. There are road networks and transit networks that are changing. The Orange Line is going into Portland in 2017, I think. I think I might have that date wrong. There's road pricing, not in the United States, but in other places. And there are parking policies. Things like parking pricing are now being used in San Francisco. And then you get out these triple bottom line metrics. Economic, environmental, and social equity considerations. And we've done full-scale simulations in Paris. This was for a billion-dollar investment in transit. And Plan Bay Area, on our home turf in the San Francisco Bay Area. An absolutely interesting project. Spent a couple of years putting together a simulation, took it for public comment, and then the Tea Party went nuts on us. It was absolutely, I encourage you to go on YouTube and look up Plan Bay Area Tea Party and see some of the comments that came back. Absolutely fascinating. I'll leave it at that. So the Bay Area case study is one I worked on personally. I won't go in too much detail. These aren't my slides. These are the regional planning agency speaking from the results that we produced for them. Largely, the regional task is to reduce greenhouse gases. They're going to do that by making more dense built environments. The goal is to put 80% of new homes on 5% of the land. Those are the pink locations there. And then you get out a ton of more specific numbers. Things like mode share, like population growth, like prices that are increasing, and so forth. Feel free to peruse this in more detail. I'm not going to go into this stuff in detail. Just note that there are five columns. There were five scenarios that were taken to the public. Sort of a no project, a transit priority, social equity. And they sought public comment and then went with the preferred scenario. So that's UrbanSim. But so I hope that gives you a flavor of what it is. Obviously, it's very difficult to talk about it. But what I really want to talk about is our experience in implementing UrbanSim within this open source community and talk about our open source stack. This is my favorite image of all time because Pandas is the library that I use every day in my line of work. And this is a panda taking over the Empire State Building, which was the cover of the Economist making a statement on the Chinese economy taking over the American economy. 
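For readers unfamiliar with the residential price model described here, the following is a rough sketch of a hedonic regression of the kind mentioned (a linear regression on a log-transformed home price, which the Q&A later confirms is the general approach), written with statsmodels. The data and variable names are made up; UrbanSim's real specifications are configured per region.

```python
# A rough sketch of a hedonic price model: regress log(price) on a few
# housing attributes. The observations and columns here are fabricated
# purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

homes = pd.DataFrame({
    "price":        [450000, 620000, 380000, 710000, 530000, 495000],
    "bathrooms":    [1, 2, 1, 3, 2, 2],
    "lot_sqft":     [2500, 4000, 2200, 5200, 3600, 3000],
    "school_score": [6.5, 8.0, 5.5, 9.0, 7.0, 7.5],
})

model = smf.ols("np.log(price) ~ bathrooms + np.log(lot_sqft) + school_score",
                data=homes).fit()
print(model.params)  # the estimated trade-offs between attributes
```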
I wish it were higher resolution. I've never seen a screenshot that was high enough resolution, but that's sort of symbolic of my life. I don't know if this is true. I hope that it's true. I don't understand why anyone programs in any language besides Python unless you're programming in C for high performance. I've been programming in Python since 2002, when we had to write our own CSV parser. The world has changed since then. Python has an incredible set of backing libraries, an incredibly active community. I am wed to Python; I wouldn't be a programmer without it. And then Pandas was a game changer for us. It led to us re-implementing UrbanSim. It lets us do all the stuff that would be in R, that would be in SAS, SPSS, Stata, all these statistical libraries. Now I can do all these things in Pandas, scikit-learn, statsmodels, these other libraries in Python. This has been a complete game changer for us. The 2006 version of UrbanSim was 150,000 lines of code. The current version is 5,000 lines of code. And that's not an exaggeration. You can look at the old one and the new one and do a line count. And this makes it so much easier to maintain, easier to explain to clients, all that sort of thing. And what I find most fascinating is that the people who programmed the UrbanSim from 2006 were writing Pandas, and they didn't know it. And so they had to write 80% of what Pandas does for us today. And they tried to pivot it out to a larger community, and they never quite got the traction. And then Wes McKinney came along with Pandas in 2010 and just changed the landscape. And so this is largely an argument that if you have something that is generally flexible, engage with the larger community in order to get that traction, or else your code will die, is essentially what it comes down to. So we actually had to rip out all of the things that were so close to Pandas and put Pandas in, because that's where the world was going. And I've mentioned this before. We are wed to our open source stack. We sell Python and Pandas and all these things as a competitive advantage for our company. And I believe that it is. And we also teach it as part of an UrbanSim implementation. Our open source stack is our lifeblood. And we employed someone, Matt Davis. I'll give him a shout out at this point. He does something called Software Carpentry, which, you can hire him and all of his people that work with him to come teach the Pandas methodology for your organization. Python has a huge ecosystem of tools. We're in bed with all of them. The notebooks, the testing, the documentation. And then we also use a lot of web tools to do very interactive communication of our results. And then GitHub has changed our world as well. Nothing new there. Everyone, I hope, is using GitHub or something very similar to it. But I will say we've had great success with our clients using GitHub. And they are not programmers. Most of the time, I'm speaking to a very different audience than you guys. And our clients love the fact that what we do is completely open to them. We consult for them or contract for them. And they get to see what we're doing on an almost daily basis and know that their money is being well used. OK, so the Urban Data Science Toolkit would be the set of things that we have pivoted out of UrbanSim that might be interesting to you guys, I hope. And one of those is what we call Pandana. So this is a neologism for Pandas Network Analysis. 
It does a lot of the things that ArcGIS Network Analyst would do, but with a very Pandas-like API. So if you happen to be familiar with Pandas, you should be very familiar with the API for this. And what it does is it does these sort of travel shed things that we've been discussing in this session. This is a network query, a buffer query from the red point. This is the OpenStreetMap network from downtown San Francisco. And you have this origin point, which is the red point. And then you have some distance, and you go out along the network, and you touch all of the nodes that you can get to within a certain amount of time. Really, you can change that time to any sort of generalized impedance. It's quite flexible. And you aggregate what happens within that buffer. So things that are further away might count less. They have weights that are not 1.0 all the time. And so I can sum the population around this point and get this very smooth surface. So totally different from GIS, which has these shapes. And you're either in one shape, or you're in the other shape. And then one of them is really big, and the other one's medium, and the other one's small. You have these very smooth surfaces that are defined by being able to move out along the network. And for example, so a lot of where I came from was doing things like Walk Score. If you know Walk Score, it's accessibility to nearby amenities. And that's a weighted combination of these nearby amenities. So this is essentially a walk score using OpenStreetMap points of interest, combined to show where places are high accessibility or low accessibility. So these very orange spots are high accessibility and the very green spots are low accessibility. If you don't recognize it, this is San Francisco. And every street intersection gets its own query and therefore a value of what's being aggregated around it. And you get these very nice smooth surfaces. And then I can just color those points and drop them in. Or alternatively, we also have, this is not open source, but it is free, where you can take an analysis with colors on parcels and extrude them into 3D. And you can do this for the whole region. So this is one of those network accessibility maps. I believe this is sushi accessibility, which is always my favorite map. And these are the predominantly Asian neighborhoods of San Francisco. And so we call this Mount Sushi. But you can actually color all the parcels in a very large area very quickly because we're using OpenGL instead of just rasterizing everything. So this is called GeoCanvas. And you can go to our website and download it and use it. You don't have to use the network stuff with GeoCanvas. Anything that you can put on a parcel, you can see in GeoCanvas. And then you just hit a button and it's 3D. Yay. Then we have a workflow tool, which helps if you happen to have a Python and Pandas workflow where you run steps one after the other and you configure each of those steps and you want to look at the tables and the maps. We have a tool, and nothing is specific to UrbanSim in this case. So we hope that'll be useful to anyone who has that sort of workflow. This is configuring the models. This is running the batch jobs that can take hours on a server and checking in on their status and that sort of thing. These are tables. This is just diving into all the HDF5 tables that are available through Pandas. This is using Leaflet for maps. You can just aggregate things and look at zones on a map. 
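The travel-shed aggregations described here are what became the pandana library. Below is a sketch of that workflow on a tiny toy network. It follows pandana's documented API (Network, precompute, get_node_ids, set, aggregate), though exact parameter names can vary between versions, and the node, edge, and restaurant tables are fabricated for illustration.

```python
# A sketch of a network buffer aggregation with pandana on a toy network.
import pandas as pd
import pandana as pdna

# Four nodes and three edges, with edge lengths in meters.
nodes = pd.DataFrame({"x": [0.0, 100.0, 200.0, 100.0],
                      "y": [0.0, 0.0, 0.0, 100.0]}, index=[1, 2, 3, 4])
edges = pd.DataFrame({"from": [1, 2, 2], "to": [2, 3, 4],
                      "distance": [100.0, 100.0, 100.0]})

net = pdna.Network(nodes.x, nodes.y,
                   edges["from"], edges["to"], edges[["distance"]])
net.precompute(300)  # the largest horizon we intend to query

# Two made-up restaurants, snapped to their nearest network nodes.
restaurants = pd.DataFrame({"x": [10.0, 190.0], "y": [0.0, 5.0]})
node_ids = net.get_node_ids(restaurants.x, restaurants.y)
net.set(node_ids, variable=pd.Series(1.0, index=restaurants.index),
        name="restaurants")

# Sum of restaurants within 150 m of every node, computed in one call.
access = net.aggregate(150, type="sum", decay="flat", name="restaurants")
print(access)
```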
And then our ultimate goal is to be able to work with cities so that you can visualize the future of your city in 3D as buildings. So this is actually not an open source project, but I just want to throw one screenshot up there. This is 3D buildings. And I don't think we have time for the actual demo. Let's do it. Let's do it. Well, so this is what it looks like. I'm not going to do the live demo, but this is an IPython notebook with converted slides. And so you grab your network out of the HDF5 file, a few lines of code. You initialize and preprocess the network. You just grab your nodes and your edges and your weights, which in this case is OpenStreetMap. So it's just distances. Then I can do things like point of interest queries, which would be, so I've got one category. It's going to be restaurants. I got these restaurants from OpenStreetMap, by the way. And I'm going to do the nearest points of interest for those restaurants. And I'm going to grab the 10 closest. And then I can actually use matplotlib. And is that big enough? That's not the best. All these things come out as dot maps. And they actually look pretty nice. So this would be the distance to the nearest restaurant. This is the distance to the fifth nearest restaurant. And you can see it gets greener. And then this is the distance to the 10th nearest restaurant. So this is one way of looking at accessibility. There are very few places in San Francisco that have accessibility to 10 restaurants. Only those places that are orange, which is essentially my mental map of San Francisco. And I'm sort of leaving out the punch line. I don't know why I didn't mention this earlier. Each of these queries runs in less than a second. Walking-scale queries for the whole region, less than a second, no problem. For a 45-minute sort of regional-scale query, 10 seconds. We run walking-scale queries. And you can actually drag the slider back and forth and re-visualize it in real time. So like 20 milliseconds, no big deal. And the other thing is, this is San Francisco. I can't even show all the data that we're aggregating. That's my biggest problem: display, not the analysis. I'm not talking about one second for San Francisco. I'm talking about one second for the Bay Area. So 226,000 nodes; this is a tenth of that. So all of the Bay Area in about a second. And it's a general flexible aggregation tool as well, not just the nearest thing. But this would sum restaurants within 500 meters. And you get these very local scale metrics. If I go up to 1,000 meters, it gets smoother. If I go to 2,000 meters, it gets quite smooth. And 3,000 is that smooth surface that I was talking about. So if you want the local scale, if you want the regional scale, you can pick and choose. You can use them in models. You can convey them to people, all that sort of thing. And just to emphasize, what I'm really doing is this much data. And it looks like that, which is not pretty. So if anybody wants to help me make this look better, I'm game. Right now, we use GeoCanvas for this. But I'd like to be able to do it in the notebook. Any questions? Yes? Cool stuff. The Synthicity website is down, and the Pandana, I was trying to find the open source link. And it seems like the Synthicity website is down. I don't know about that. But just go to GitHub slash synthicity. OK. Yeah. It's all open source. Yeah. Yeah, the model itself and some of the, you're using the Python statistical libraries to run the simulation. Sure. 
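The "dot maps" mentioned above are just one colored matplotlib point per street node. Here is a generic, self-contained sketch of that kind of plot; the coordinates and accessibility values are random stand-ins for real network nodes and query results.

```python
# A generic "dot map": one point per node, shaded by a per-node value.
# The coordinates and values below are fabricated stand-ins.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.uniform(0, 1000, 500)          # fake node x coordinates
y = rng.uniform(0, 1000, 500)          # fake node y coordinates
access = rng.gamma(2.0, 2.0, 500)      # fake per-node accessibility values

fig, ax = plt.subplots(figsize=(8, 8))
pts = ax.scatter(x, y, c=access, s=4, cmap="YlOrRd")
fig.colorbar(pts, ax=ax, label="restaurants within 500 m")
ax.set_aspect("equal")
plt.show()
```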
Can you talk 20 seconds on some of the statistical methods you're using? Sure. So I mean, we use linear regressions on log-transformed outcome variables for home prices. And then most of what we do is discrete choice theory. So we had to come up with some new methods to do choices among very large choice sets. And we choose amongst the large population of, say, 200,000 vacant units in a region. So I have this probability distribution over those 200,000 units. And so we compute all of that within Python. And that's the part that we had to write ourselves. That wasn't widely available. So we come from a discrete choice background. Everything we do is discrete choice. Everything advances in discrete choice. Yeah. Yeah, as far as the transportation component, like, did something show what the transportation effects would be, like the traffic? Totally. So the question was about transportation modeling. I give presentations on this for, like, land use real estate modeling. And transportation modeling is a big deal. It's actually required by federal law that every MPO has one of these very large-scale transportation models. And they came up to me after one of these presentations and said, can you do the same thing for us? So we're working on it right now. Yeah. Yeah. Hey, Fletcher. How's it going? So you talked about less than one second, massive calculations. Also, your source is OpenStreetMap data. That's correct. So can you talk a little bit about what you do to take, you know, say a giant dump from OpenStreetMap, you know, gigs and gigs of some data in that form, to get it into the form that you can then do those calculations in under a second? Sure. Fantastic question. So I don't know if you've been following the GeoPandas folks, but they're trying to pivot the pandas methodology into geospatial technology. I have no idea whether or not you're giving a presentation right now. But it's a thing. It should be a bigger deal than it is. And one of their developers has been working on OpenStreetMap import. You can now just give a bounding box and get the OpenStreetMap data within that bounding box using GeoPandas. But it's not yet a routable network. And we're working on that. It's not a huge step. It will be done soonish, certainly doable. Just as a substitute to stay out of like a Postgres database and just do everything in Python. But great question. Yeah. Can you go back to the video links for your demos so we can jot those down easily? For the, what's that? Actually, my Pinterest has pretty much everything. The Urban Data Science Toolkit is one of my, what do they call these things? Pins. Well, but there's like a pinboard or something, right? Like multiple pins go in and nobody uses Pinterest here. That's great. I'm not embarrassed. I hope that's true. I hope that's true. PhD stands for pins is data. My thesis is available. If you go to fletcherfoti.com, there's like a list of papers. And I'm a big fan of the web. Like everything should be on GitHub, on Tumblr, on Pinterest. Like my life's pretty public at this point. Just poke around. If you want to just know about the toolkit, perhaps the best place to start is synthicity.com slash toolkit. Yeah. Back probably around the time when they were implementing the original C++. 
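To make the discrete choice answer more concrete, here is a stylized sketch of a multinomial logit over a sampled choice set: draw a handful of alternatives from a large pool of vacant units, score them with made-up coefficients, and convert utilities to choice probabilities with a softmax. This illustrates the general technique only, not UrbanSim's estimation code.

```python
# A stylized location-choice step: multinomial logit over sampled alternatives.
# All data and coefficients are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Attributes (say, price and an access score) for a large pool of vacant units.
pool = rng.normal(size=(200_000, 2))

# Sample a manageable choice set for one household and score it.
idx = rng.choice(len(pool), size=5, replace=False)
coef = np.array([-0.8, 1.2])          # dislike price, like accessibility
utility = pool[idx] @ coef

# Softmax turns utilities into choice probabilities; then draw one choice.
probs = np.exp(utility - utility.max())
probs /= probs.sum()
chosen = rng.choice(idx, p=probs)
print(idx, probs.round(3), chosen)
```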
And I'm wondering if there's any connection between this and that, or if you know anything about that. I don't think there's a direct connection between the two. There's a long history of these overlapping urban modeling teams and academics and that sort of thing. And different regions have a different lineage. It's quite interesting. But I don't know the details on that. Hi. I just wanted to know whether with the network modeling side, the guys looking to implement the routing within Python had looked at like NetworkX? Sure. So NetworkX, about four years ago, my advisor said, we need to bring in NetworkX. And so I took a look at it. And it's predominantly doing graph matching. Like a graph that looks like this also looks like this. And how similar are they? And it was really not well suited for accessibility queries. And the other big thing that everybody's doing is routing. And that's a big deal. There are a lot of really good reasons for that. But no one's actually transformed routing into this analysis engine for network analysis. And so there was a gap. NetworkX definitely didn't cover it for us. OK. Thanks. Thank you.
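For contrast with the pandana approach discussed above, this is roughly what a single accessibility query looks like in NetworkX, using an ego graph bounded by network distance. It works for one origin, but repeating it for every node of a regional network is the slow path the answer alludes to; the graph and restaurant counts here are made up.

```python
# A "naive" single-origin accessibility query in NetworkX, for contrast.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from(
    [(1, 2, 100.0), (2, 3, 100.0), (2, 4, 100.0)], weight="length")
restaurants = {3: 2, 4: 1}   # node id -> restaurant count (made up)

def restaurants_within(graph, origin, max_dist):
    """Sum restaurant counts on nodes reachable within max_dist meters."""
    reachable = nx.ego_graph(graph, origin, radius=max_dist, distance="length")
    return sum(restaurants.get(n, 0) for n in reachable.nodes)

print(restaurants_within(G, 1, 150.0))   # only node 2 is reachable, so 0
```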
|
UrbanSim is an open source software platform for agent-based geospatial simulation, focusing on the spatial dynamics of urban development. Since its creation UrbanSim has been used in the official planning processes of at least a dozen regional governments, which were used to help allocate billions of dollars in regional investments in transportation infrastructure. UrbanSim was first conceptualized in the late 1990's and implemented using the Java programming language. The technology landscape for scientific computing changed dramatically after that, and by 2005 UrbanSim was converted to Python, making heavy use of Numpy to vectorize calculations. By 2014, it became clear that UrbanSim should be reimplemented again to take advantage of significant advances in the libraries available for scientific Python. The new version of UrbanSim, called UrbanSim2, makes extensive use of community-supported scientific Python libraries to reduce the amount of domain-specific customized code to a minimum. UrbanSim is an excellent case study for the power of leveraging the work of the scientific programming community as scaffolding for a domain-specific application, as opposed to building an extensive customized solution in each domain. Additionally, the open and participatory nature inherent in nearly all of the open source projects described here has been particularly embraced by governments, who are often reticent to support large commercial institutions and balkanized and private data formats and software tools.
|
10.5446/31625 (DOI)
|
So just some quick introductions. My name is Justin Deoliveira. I work for Boundless. I'm a contributor on the GeoScript project and some of the other Java projects in the ecosystem, like GeoTools and GeoServer. And I'm Jared Erickson. I work for Pierce County, which is a local government between Portland and Seattle. We're actually just south of Seattle. And I'm a member of CUGOS, which is the Cascadia geospatial users group that meets in Seattle. There's a couple here in the back. OK. Yeah. Yeah, sorry. We'll try to be good about talking into the microphone here. OK, so first question. What is GeoScript? Well, GeoScript is a library that does spatial stuff. So it provides you spatial utilities that you would use in building an application or something like that. Sort of along the same vein as a lot of the other libraries you are probably used to working with: OGR, GDAL, some of the Python libraries, Fiona, Rasterio, Shapely, along that sort of same idea. But GeoScript is targeted at the JVM. So it's only supported by languages that run on the Java virtual machine. And those languages include the following list here. And this list is actually sort of in the order of completeness. But there's implementations for Groovy, Python, and JavaScript. And again, these are the Java implementations. So when I say Python, I don't mean CPython. I mean Jython. And same for JavaScript, not talking about Node or V8, talking about the Java Rhino JavaScript engine. There's also implementations for Scala and Ruby, but they're somewhat less complete at this point. So GeoScript is sort of a spatial library for these different languages. And across the different languages, the idea is to have a relatively consistent API, but at the same time recognize that different languages often have very different syntax and paradigms. So while we try to maintain things like package names and overall structure and utilities, syntax across the languages can vary. And so GeoScript builds on top of the GeoTools library. So GeoTools is a Java library that provides all sorts of spatial geo utilities, Java-based, and it's been around forever. It's a key part of a lot of projects such as GeoServer, uDig, and the list goes on. So really GeoScript can be looked at as script bindings for GeoTools. Sort of the same way that a lot of the C-based libraries provide Python bindings. So what sorts of stuff can I do in GeoScript? So the API is sort of broken up into modules. And a lot of these modules are just things that you'd expect from any sort of spatial library. Sorry if that's hard to see. But sort of at the core of the library, there's utilities for interacting with geometries and projections. And then sort of building on that, there's the idea of format abstraction. So all your spatial formats, both vector and raster. And then we get into visualization capabilities, styling, and rendering. I'll talk a little bit about that coming through. And then sort of at the upper level, we get into sort of more specialized stuff, doing some statistical stuff with plotting, support for, excuse me, geoprocessing, stuff like that. And we'll be showing some examples. So the idea is to provide a really convenient lightweight API past what you get with libraries like JTS and GeoTools. And Jared's going to show you a few examples. 
And this isn't to knock JTS or GeoTools, because those are both really powerful Java libraries. It's just they're very powerful. And with some of the scripting bindings, we're going to make it easier. So the first case is this is how you buffer a point with the Java Topology Suite. Use a geometry factory. Use that to create a point. But first, I have to create a coordinate. And then you can buffer it. In Groovy and Python, we get rid of the geometry factory. You just create geometry objects directly. So what was three lines turns into one line of code. So let's take a more complex example. How do you render a map using GeoTools? This is actually the same rendering technology that GeoServer uses. So we have to read an SLD from files. We have to create a style factory and an SLD parser. We have to access data. So we're using a shapefile data store. We have to wrap those up in a map layer. Feed that to the map context. We create a buffered image. There's a lot of ceremony here. And then we finally use the streaming renderer to actually paint our GIS data to the image. And then we use ImageIO to actually write it to disk. It's actually pretty terse. But the scripting bindings, we make it a lot easier. So here's the example in Groovy where we're reading a shapefile as a layer. We're attaching a style from an SLD on disk. And we use a map renderer, which takes an array of layers, a width and a height, and then renders it out. So we're just trying to make things easier. So let's load some features from a PostGIS table. In GeoTools, there's a connection map. So we're putting all of our connection information in that map. We get a data store from that. Then we get the feature source, which is the actual PostGIS table. We get a feature collection. Then we use the feature iterator to go through it. We can access the attributes and the geometry. It's not too bad. We have to do a lot of try/finallys to make sure that we're closing connections to databases and things like that. In Python, that 10 lines of code turns into like three. We get rid of the whole map data store abstraction. And we just use a PostGIS layer. We feed it the database and then some sensible defaults. So localhost by default, 54. Yeah, I screwed up on this code sample. The first line is actually the Groovy version. That's why there's no. Oh, so it's just Groovy Python. Yeah, yeah. So we create a new language for this one. And then our last example is how do you create a shapefile in GeoTools? You use a simple feature type builder. You're probably noticing that in GeoTools, there's lots of design patterns, which is wonderful for building a library, not so much for actually using it. We're building up the schema. You create a simple feature type, which is like the schema from that. We're creating a shapefile. We actually create the shapefile on that line, get the feature store, and then we use a simple feature builder. It gets pretty long. I mean, it's not bad. But the JavaScript version is much easier to make sense of. We're doing the same thing. We're creating a shapefile. We're feeding it the schema using simple JavaScript objects. And then we add some data to it. OK. So those are just a few short little code samples. Again, trying to stress that the idea is really just to provide as convenient an API as possible. And to make the simple things easy, GeoTools and JTS, like Jared said, do a ton of very cool advanced stuff. But it's not always easy to do simple stuff.
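As a concrete point of comparison for the buffer example just described, the GeoScript Python (Jython) version really is a one-liner on top of the JTS ceremony. This sketch follows the geometry examples on geoscript.org, so treat the exact import paths as approximate rather than guaranteed-current:

```python
# GeoScript Python (runs under Jython, not CPython): buffer a point directly,
# with no GeometryFactory or Coordinate boilerplate as in raw JTS.
from geoscript.geom import Point

poly = Point(-122.67, 45.52).buffer(10)  # a polygon roughly 10 units around the point
print(poly)
```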
So that's sort of the problem we're trying to address. And we can just go through some more samples here of things that you might do on sort of a day-to-day basis. So translating between spatial formats is a pretty common task that people do. And so here's an example from Groovy that shows just taking a shapefile or a directory of files and converting them into PostGIS. Again, sort of sticking with the whole idea of spatial formats, number of exchange formats, like GeoJSON, GML. And it's really easy to sort of convert between different representations. Analyzing data. So oftentimes, we download data. We have no idea what it looks like, what attributes it might make available, what its distribution might look like, if it's an elevation model or something. So we want to explore. So here's an idea of actually taking a DEM, getting a histogram, and actually plotting the results. So get a nice little bar chart of my elevation values and sort of how they're distributed in my data. So this is cool because then I can take that and maybe create a color map for a style or something like that. So sort of exploratory type tasks. Again, just sort of getting into general processing. This is taking, again, a shapefile and just reprojecting it on the fly and dumping it into PostGIS. No, this is just reprojecting on the fly. What else have we got here? Generating styles. So this is something that's actually really painful if you just do it with straight GeoTools. So this is something we've really focused on to make quite a bit easier. And so any GeoServer user has seen this style. It's very beautiful. We spent a lot of time on it, as you can tell. And the SLD for that in GeoServer is, I don't know, something like 70 lines or something like that of XML. And we're really collapsing it down to four lines here. This one I was inspired by this morning by the talk from Mike and the idea of a choropleth map. So here's five or six lines of Python that can generate a style that looks like that. And I'm trying to build up my Python functional programming cred here, as you can see. There's lots of lambdas and map reduce and zip. So I think I'm covered. This is a cool example that Jared came up with, taking a raster and cropping it based on not just a bounding box, but an actual geometry. So in this case, using a circle to create a new raster that has the cropped image in it. We're still good, I think. OK. OK, so we have all these great modules in GeoScript for dealing with vector data, geometries, projections. So how can you use it? So we're going to give you some ideas. The first way is really the reason why we did this. So you can use it on the command line. You can start a REPL, a read-eval-print loop, and explore your geospatial data. So this is an example of a session using the GeoScript Groovy shell. You can connect to your PostGIS database, list out the layers, grab a layer, you find the bounds, find the count, and then iterate through the features. So you can do that interactively. And then you can also write this as a script in GeoScript PY, GeoScript JS, and then run it. The second way is this is actually, there is a community module to plug GeoScript into GeoServer. And so you can do things like you can write your own WPS scripts using these scripting languages. You can write filter functions that you can embed in your SLDs. And you can also write just like mini applications. So it's like a REST endpoint in Python, JavaScript, and Groovy, which is pretty neat.
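To give a feel for what one of those WPS scripts looks like, here is a minimal buffer process in the spirit of the GeoServer scripting extension's Python hooks. The geoserver.wps process decorator and its keyword arguments follow the pattern shown in that community module's documentation, but the exact signature varies between GeoServer versions, so treat this as an illustrative sketch rather than copy-paste-ready code:

```python
# A hypothetical GeoServer scripting-extension WPS process (Python/Jython).
from geoserver.wps import process
from geoscript.geom import Geometry

@process(
    title='Buffer',
    description='Buffers the supplied geometry by the given distance',
    inputs={
        'geom': (Geometry, 'The geometry to buffer'),
        'distance': (float, 'The buffer distance'),
    },
    outputs={
        'result': (Geometry, 'The buffered geometry'),
    },
)
def run(geom, distance):
    # GeoScript geometries wrap JTS, so buffer() is available directly.
    return geom.buffer(distance)
```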
And new in GeoServer 2.6, we did a user interface for this. Before, you would actually have to get on the server and write your scripts. Now, one of GeoServer's real strong points is it has a really nice UI. So we provided a UI for this. And so this example is showing you the scripts. You can create a new one. This is how you create a process to buffer a geometry. And later, Justin's going to show a really nice example of a much more complicated web processing service that he wrote using Python. And then GeoScript also comes embedded in uDig in the spatial toolbox. And the neatest thing about this is we didn't have to do it. The uDig developers did it. Andrea Antonelli, whose name I hope I didn't mispronounce, he did this. And so GeoScript comes embedded so you can run GeoScript scripts from uDig. So this is an example of using the style API that Justin was showing and actually creating a style and then displaying it on the screen. The one limitation with scripting support in uDig right now is you can't access the uDig object. So if you've done any Python scripting like in QGIS, where you can create a layer and then add it to your map, you can't do that yet in uDig, but still it's pretty neat that it comes embedded in uDig. And then also you can use it as a library. So you can embed GeoScript in your Java applications. It's as easy as adding a dependency in Maven. So you can use this to write your own RESTful web services, to write your own web apps, command line apps, desktop apps, whatever you're going to write. And here's the same sort of dependency information for Gradle. OK, yeah, just quickly mentioned. So I don't know if you guys are familiar with LocationTech, but it's sort of this kind of a newer initiative to bring together a lot of spatial projects, sort of along the same lines as sort of what OSGeo does. So yeah, and GeoScript is actually in incubation with LocationTech right now. So we're still very early stages with it, but we're definitely looking forward to that. So we've got about five minutes here. So we're going to talk and show you some sort of real world examples. Jared sort of hinted at this before. Do you want to do, which one do you want to do first? Do you want to talk for years? OK, sure. So we have two real world examples. The first one is sort of like a typical GIS analyst spatial analysis type of an example. And my goal was to take the West Nile virus data from the CDC and actually create an animated map. So this exercises several things in GeoScript: being able to read and write spatial data and then being able to render it, all in one library. So I went to the CDC to actually find this information. The information is in a PDF, which is great for printing, but it's not so good for scraping. So one of the really key selling points of GeoScript is it targets the Java virtual machine. So you can access in a GeoScript script all the Java libraries. So any problem you have, there's probably five libraries that will solve it for you. You just have to pick one. It's an embarrassment of riches. So we're using iText to actually extract the text from that. And then there's some really boring code to actually turn that extracted text, because it's not pretty, into a CSV file. But this is what the CSV file looks like. And then we're downloading data from Natural Earth. So you just actually, this isn't using any GeoScript. This is just using Java features and Groovy features to download the zip, the states shapefile.
And then we actually want to join all of that data on the counts of West Nile virus incidents per state per year. So we read in the states file, we create a new layer. We write all the features. And then we actually, down here at the bottom, we actually iterate through the features, find the data in the CSV file, and then attach it. So this is updating shapefiles in place. And then finally, you have the interesting part of actually drawing those maps. For each year, we're going to create an image, creating a gradient style, which is one line of code using equal interval. And we're actually using ColorBrewer, which, that's what the Reds is. That's a ColorBrewer style that Mike Bostock talked about earlier. We create the image with our map. And then this part, I thought was neat. I wanted to draw the year on top of that image. So we provide really nice APIs to wrap GeoTools. But you can always drop down to Java and use nitty-gritty Java 2D stuff. And then we just did a really nice animated GIF. Because I mean, everybody loves animated GIFs, right? So he wrote really nice support for that in our library. And so this is what you end up with. And I put the Stamen watercolor base map underneath it, because that just makes demos nicer. There it is. All right. So real quickly here, so sort of a second example. And this is actually something that was actually done by some folks at the USGS. They were experimenting with some of the GeoScript stuff to do some processing. So this is kind of a simplified version of it. But basically, the idea here was to take some protected lands data and basically do a classification over a specific area. And this is sort of what the application looks like. But I can actually show you a demo. So what this is doing here is it's using the scripting support in GeoServer. So GeoServer has a processing extension, WPS. And so not only can you serve up maps and have data, serve up data from your server, but you can actually run your own processes on it. And so it's kind of a perfect use case for GeoScript, because oftentimes processes are very sort of ad hoc. You want to be able to prototype them quickly. So it's kind of a nice match. So I'll show you some of the code here. But actually, let me cut to it now. But this is sort of what a process, sorry if that doesn't show up well, looks like in GeoServer, this is in Python. I'm going to gloss over a bunch of the details here. But it's about 38 lines. And it's showing off some intersection, fancy stuff. But so I'll cut back to the browser here. So the idea is I add my protected lands overlay, and then just draw a polygon on the map. And then I get a nice classification right there. And this is all being done on the server by that process I just showed you. So it's pretty snappy. So again, the chart is using D3. Yeah, so again, this is sort of what a process looks like. You define a run function. You have to define some metadata about inputs and outputs. And this is really sort of part of the WPS protocol. And then very simple, I can access my data. So I have this protected area data set. And then I basically just take that polygon that the user draws on the map, iterate over some features, do an intersection, simple area calculation, and sort of aggregate the results. Sorry, I'm missing the very bottom part there. So yeah, so pretty easily able to sort of extend GeoServer through scripts, which is really nice because you can do the same thing in Java. But you have to recompile everything.
You have to restart the server anytime something changes. Scripts, the way the script engines work, it's all loaded on the fly dynamically, as you would expect from a scripting environment. Is that it? And yeah, so that's it. Thanks everybody for listening. I think we have a few minutes for questions. And if you ask a question, please wait for the gentleman with the mic to come around to you. So just raise your hand if you have a question. Up here? Oh, one there. How much of a performance hit am I taking for going with scripting? Good question. It's really dependent on the language. So with the Python one, you actually take quite a bit of a hit. So compared to CPython versus Jython, there's quite a noticeable difference. Other languages, not so much. Like the Java version of Ruby actually claims to be faster than the C version of Ruby. So it's really dependent on the language. But sort of crossing the barrier from Java to the scripting engine is fairly minimal. What's the level of feature parity between all the different versions? Because you've mentioned Python, Ruby, Groovy, and JavaScript, and I have to believe that there's at least some level of difference. Yeah, and there is. So going back to that first slide, the Groovy implementation, which Jared is responsible for, is the most complete. Python is fairly close. And then JavaScript, there's a few things it's missing. And then it drops off pretty fast. So one of the things we actually want to do, and this is part of the LocationTech incubation, is really standardize on that and come up with a version scheme that indicates, hey, this is how complete this implementation is. And just really get on people to add the features to get them up to parity. So yeah, you can just go to find out more or download software to play around. Just go to geoscript.org. And yeah, thanks. And with that, we'll see you in quite a bit.
|
GeoScript adds spatial capabilities to dynamic scripting languages that run on the JVM. With implementations in Python, JavaScript, Scala, and Groovy, GeoScript provides an interface to the powerful data access, processing and rendering functionality of the GeoTools library. GeoScript provides concise and simple APIs that allow developers to perform tasks quickly, making it a great tool for the day-to-day data juggling that comes with geospatial data. This talk will focus mainly on real world examples that showcase the power of the library. Come check this talk out if you are interested in learning about a new tool to add to your geospatial hacking toolbox. Maybe you have tried to use GeoTools but find it too difficult and complex to use. Or perhaps your Java skills are not where you would like them to be. If that is the case, this talk, and GeoScript, might be just what you are looking for.
|
10.5446/31627 (DOI)
|
Good afternoon. I'm Derek Swanipal. I'm from the remote sensing research unit at the Council for Scientific and Industrial Research in South Africa. And if that's a mouthful, I haven't even gotten to my talk title. So the title of my talk is Fast Big Data, a high-performance system for creating global satellite image time series. I'm going to skip over the outline here in the interest of time. To provide some background, many research and operational geospatial applications analyze data through time rather than space. Some examples of these would be climate change applications, land cover change, crop monitoring or fire history analysis. And these applications need rapid access to extended hyper-temporal time series data. The primary focus of our research is on MODIS, which is the most widely used satellite sensor for time series data. But this really is applicable to any gridded type of data set. It could even be used for Landsat if you grid it with something like WELD. And then just for those who don't know what a time series is, if we take a stack of images over the same area, the time series is simply the set of values for a particular pixel and can be represented like a graph like that. And a concrete example of an application that depends on time series data is a mobile app that we've developed. And among other things, it provides MODIS-based vegetation curing and fire history for any point on Earth. And this data is instantly accessible to the user who clicks on a point or taps on a point on a map. And rapid access to this data is impossible if we use the raw data in its original spatial form. And in this presentation, I'm going to show how we transform this data into a time-optimized form that's usable by applications like these. So we're really on to the challenge of global time series data. And we're talking 175 MODIS tiles over three or four different products comprising nearly 176,000 images and more than four terabytes of raw data. If I can just quickly skip, this is sort of a live view of our collection of MODIS tiles, different products indicated by different colors. So that's really a lot of data. I think we can tick the big data box here. So we want to convert the spatial data to a time series, but that is quite a resource-intensive task. Previous research done in my group looked at how to do this in a resource-constrained environment. So we only have a sort of a desktop-class or a small-server-class type of hardware with about eight gigs of RAM or so. But I'm going to present how we did this in an unconstrained environment where we've got enough RAM, enough processing power to do this without resorting to strange hacks to get all of this data into time-optimized form. I'm describing a fully automated high-performance system using commodity hardware, so no fancy supercomputers or clusters required. So just to give a little bit of a system overview, we've essentially got a component that downloads data automatically every eight days or whenever new MODIS data is available, verifies its integrity, and then a once-off process that looks at the MODIS tiles, finds out where the land bounds are so we can trim away all the ocean no-data which we don't care about. And then from these collections of data, we build data cubes which I will describe just now. And then from these cubes, we can produce maps or we can serve up the data as time series to various different applications.
And this whole system is constantly monitored every step of the way and whenever something goes wrong, it will report it to us with an email or similar. And then some of the components in this system, nearly all of our applications are built on top of GDAL and HDF5. The heart of our system is the DataCube API which is a library written in C++11. And our cube building infrastructure runs on top of that. Various cube applications that use the data or process it further are also built on top of it. And then we've also got a Python-based cube server which serves this data via HTTP using a JSON API to either web apps or mobile apps. And then all of this depends on the storage area network that stores all our data for us. Now, for those of you not familiar with what a storage area network or a SAN is, it's essentially a high-speed, large capacity storage infrastructure that's built on top of a dedicated network. And it looks a little bit like this. So we've got storage servers connected to processing servers using 10 gigabit Ethernet, using iSCSI as a transport protocol. And this is actually quite cheap, depending on who you are, of course. We built this for about $50,000, but that's because we're far away in South Africa. And if you did this here, you probably would spend about $40,000. And we make good use of this infrastructure to be able to provide time series at this magnitude. Now, the key here is that accessing a pixel value through an image stack is really, really slow. To read one pixel using the Python GDAL API from, say, 660 MODIS MCD43 images takes all of two minutes. And that's really because we're dealing with extremely poor data locality. Even though that amount of data is just about over a kilobyte, we're picking that out from 34 gigabytes of raw data and we're opening 660 different images using GDAL. There's just a really massive amount of overhead. So what we need is a time-optimized data structure to store this data. We put each pixel's time series together and then we, in order to do that, we transpose the data from its original spatial form into a temporal form and then achieve optimal data locality for time series purposes. And we use a data structure that we call the hyper-temporal data cube. Now, that sounds very fancy, but it's in the end quite simple, really. We start with, say, a stack of images. So we've got t images here of w width and h height. Each of them has up to n bands. And while this is not the data cube yet, this is simply all the data that we're going to use. We start with, say, the first pixel there and we transpose it, put all these pixels from all these images together, and then we get the time series of pixel zero. So we move on to the other one, take pixel one, there's its time series. And so we go through the whole image until we've transposed all of their time series into little bits that are close together. Now, as I've described, reading one pixel throughout all these images is so slow that it's completely unfeasible to do it that way. What we actually do is we take one image and then we spread its pixels into a large output buffer. So say for the first image, we take all of its pixels and put it into position t zero. The next image, all its pixels go to t one and the next one goes to t two, and so on until we've processed all the images. So this gives us a data structure where we can rapidly read each time series sequentially without having to skip over all of this massive data.
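A toy version of that spread-one-image-at-a-time transposition, written with NumPy and the GDAL Python bindings rather than the production C++11 code, might look like the following. The file names and single-band GeoTIFF inputs are hypothetical stand-ins for the real MODIS HDF4 subdatasets:

```python
import glob
import numpy as np
from osgeo import gdal

# Hypothetical per-date rasters for one MODIS tile, one band, already on disk.
paths = sorted(glob.glob('h20v11_ndvi_*.tif'))
first = gdal.Open(paths[0]).GetRasterBand(1).ReadAsArray()
h, w = first.shape
t = len(paths)

# One row per pixel, one column per date: the "hyper-temporal" layout.
cube = np.empty((h * w, t), dtype=np.int16)

for i, path in enumerate(paths):
    band = gdal.Open(path).GetRasterBand(1).ReadAsArray()
    cube[:, i] = band.ravel()  # spread this date's pixels into column i

# Any pixel's full time series is now one contiguous slice.
series = cube[1234, :]
```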
And this is really what we call the data cube where all of this data is reorganized in a temporal form. Now, just to talk about the data volume for a bit, if you look at MODIS 500 meter data, every tile is about 5.8 million pixels per band. For MCD43, there's seven bands, so that's about 40 million pixels per image. There's 667 of these images to date. So that's almost 27 billion pixels per cube. And then for 16 bit data, we're talking about 50 gigabytes of data for one cube. Now, does this all have to be in RAM at once? Not exactly, but if you do memory-constrained transposition, it's extremely inefficient and slow. So if you've got a server with limited amounts of RAM like four, eight, or even 16 gigs of RAM, you're going to have to really just go multiple times through the data in order to do this transposition. And in doing so, you're not getting any cache benefit from the operating system because you're reading multiple times through the data. Every time you get to the end of the data set, you've evicted all of the stuff you've just read out of it, so you're really just thrashing those caches. So how much RAM is enough then to do this? Well, in order to make only one pass through the data, you pretty much have to get the whole thing into RAM, so that's 50 gigs of RAM. But RAM is so cheap these days that it's really not a hard problem to solve. 256 gigs of RAM like we have in our server is not actually a big deal anymore. And although this increases by about 3 gigabyte per year for MODIS eight-day data, it's not going to be an issue anytime soon. Hopefully MODIS will last us that long until it becomes an issue. So this is basically our cube building process. We've got a block that allocates all the memory we need at first. And then we have three threads that start processing various bits in parallel. So we have a section that caches the files, makes it faster for GDAL to read them in. Another thread that reads all of this band data into buffers. And then the last thread that transposes it into the eventual output buffers that then get written to an HDF5 cube. So an interesting aspect of time series cubes is that updating them requires rebuilding them from scratch. And a reasonable question to ask would be why don't you just leave space at the end of each time series and then append little bits of data in there? Well, it can be done, but it turns out that every chunk in an HDF file will be modified if you append data like this. And if you use compression, which we always do, this entire file changes anyway. So you're rewriting the whole thing and it turns out to be no faster than rebuilding it from scratch. But with enough memory like we have, we can do a bit of optimization and rebuild this cube using an existing cube. We read the whole existing cube into memory, put that RAM to good use, transpose only the new data which might be one or two or three images. And in that way, we get about a 35% performance improvement over building the whole thing from scratch. So just to look at some concrete examples of this improvement, if you look at the baseline, this is our old way of processing this data with limited amounts of memory in a constrained environment for this tile H20V11. This covers about the northern part of South Africa and it's a tile with 100% coverage, so mostly land, can't crop away any no data here. That used to take about 42 minutes to build a time series optimized cube out of that. In an unconstrained environment, that goes down quite a lot, so almost 12 minutes.
Obviously, we can't crop this tile, so that stays the same and then updating it, we squeeze a little bit extra performance out of it. Ultimately, we can build a tile like this about four times faster. And then looking at a smaller tile that can be cropped to about 33%, this is one with a lot of ocean. Baseline constrained environment, it would take about 27 minutes to build a tile like this. It rapidly decreases once you've thrown more RAM at it, to just about eight minutes, and cropping it halves that again. And also a tile like this has a really fast update time because there's not a lot of transposition of no data that needs to be taken care of. So ultimately, for a tile like this, we get a four to ten times improvement. And the important part here is really that these speed improvements allow us to process the entire Earth's MODIS tiles in about a day or a day and a half, which allows us to really provide time series at this scale without tying up lots of computing resources for a week or more than a week. But we did face a couple of challenges in this whole process. We discovered that GZIP decompression or deflate, as it's otherwise known, is a serious bottleneck. We can read from our SAN at 1,400 megabyte per second, but the CPU can only decompress this data at 100 or 200 meg per second. So we're really not getting the benefit of all this fancy storage if we're decompressing data at that rate. We wanted to look at multi-thread transposition of the data as well, but it turned out to be an absolute nightmare for the poor CPU. It's really the worst way in which it can access the CPU cache. If you've got multiple threads trying to write to the same bits of memory, you end up with really a terrible problem where actually only one thread can do this at a time. So multi-threaded transposition ends up being no faster than single-threaded transposition. And then a massive bummer is that HDF4, which the raw data is stored in, is not thread safe. So there's no concurrency possible. You can't read multiple bands of MODIS data with multiple threads in the same program. Then we also discovered that, well, HDF5 can be thread safe, so you can throw multiple threads at it, but it uses global locks, so there's once again no concurrency possible. And this is really disappointing for a supposedly high-performance scientific data format, not being able to use multi-threaded programming to access data that's stored in it. Furthermore, it only supports the C API and not C++, so that is a bit of a setback as well. So we thought about using parallel HDF5, but that comes with its own set of problems. Again, no C++ support for that, no thread safety either. So although it can do parallel access of data, you can't write parallel compressed data, which is pretty much what we want to do. And it comes with a whole load of other limitations which made it somewhat unsuitable for what we wanted to do with it. So that leads me into some future work that we'd like to do. In terms of the performance side, we really want to investigate recent improvements that Intel made to the Zlib library, which pretty much all decompression is based on when we're talking about deflate. So they've done some SSE3 improvements that can really, really speed up the decompression of data. We could also pre-compress the bands in memory before we write them. So there's HDF5 functionality for doing this. And then ultimately what we would like to do is implement a proper multi-threaded C++ API for HDF5.
And hopefully if I'm back next year, then this actually worked. In terms of accessibility of our data, we'd like to investigate the idea of creating three-dimensional data sets, so time, by width, by height, instead of the current two-dimensional data sets that we have. So we have T times N, which is the width times the height. But that doesn't make the data very accessible to things like HDFView or, you know, your average HDF software. We'd also like to use NetCDF, which would open up this data to things like PyDap. Currently we use our own Python implementation to get the time series data out, but that's obviously not so efficient in terms of accessibility. And then again to just get back to some concrete examples of how this data is used, we've got this mobile app that we've developed which provides fire data for the whole world. It also then provides some MODIS-based vegetation data. So on that graph, the white line there is the curing history, so that's an indication of how dry the vegetation is. And then on that line are all the occurrences of fire with their dates. And really this relies absolutely on speedy access to this data because you can't expect a user to wait minutes just to draw a graph like this. And then we've got a web viewer as well. This is essentially just a Google Maps interface where you can click on a point in it. It gives you various vegetation indices and other MODIS-based indices. Users can enter their own lat-longs and hopefully in the near future upload shapefiles and get averages for their areas. And then finally we have got a QGIS plug-in which does very much the same thing. Any data that you can display in QGIS, you click on a point and it gives you the time series values back for those points. Thank you. APPLAUSE Any questions? Hi. I just want to confirm why do you use HDF? Is there other options for file formats that you can consider? We experimented with a number of file formats. We tried just flat files or storing each time series as its own file and using the file system itself as the sort of database. But all of those options are quite slow. The nice thing about HDF is that you can include some geospatial metadata with it and not just store two-dimensional data but any number of dimensions. So for some of the data we work with that's quite valuable, especially the multi-band data. This is really interesting. I also do MODIS time series generation and so it's cool to see how other folks have done it. The questions I have, the first one is how much time do you spend on actually doing the transposition into time series versus saving as HDF5 to disk? And I wonder if using a Hadoop cluster, for example, to actually do that transposition and then using something else to do the post-processing and then HDF5 might save you some time. Maybe not, just a thought. And then the other question I had was whether you can do sort of aggregate statistics over a region using your setup or is it really designed for doing specific pixel lookups most efficiently? So if I wanted to know how many fires happen in California, could I find that out easily? Okay, so in terms of the transposition, that really depends on the type of tile, the MODIS tile, because the no data values tend to transpose a lot faster and also if we crop a tile and trim away the no data, then things go a lot faster than for the fully land-based tiles. But I'd say transposition takes about the same amount of time as reading.
So that would be, I would say, about three minutes, two and a half minutes for both reading and transposition. And then writing out a file is actually quite slow because of that single threaded compression. We use both SZIP and GZIP compression depending on the type of data. Certain data compresses better with GZIP and others better with SZIP. But compression can take up to about three minutes and writing it out. It's a lot faster now that we have the SAN because we can write at about 800 or 900 megabyte per second onto the SAN. So that does chop off some of the time. And in terms of the spatial aggregation, it is possible, depending on what you use to read the data, I think if you use sort of GDAL's C++ API, you can read millions of pixels in a very, very short time. So you can definitely spatially request time series for many, many points and then do statistics on top of that. But if you want to get a slice of the data to produce an image, this data structure, once again, is completely the opposite of what you would need to be able to produce an image. So don't do images for this. Hi. That's pretty interesting. You mentioned trying to do a certain problem with multiple threads. Presumably it was transpose or something on the same cube I was presuming. And I'm wondering could you possibly have, could you possibly break down your data into many more cubes so that the data is more decoupled? I'm not sure. I'm not used to MODIS, so it may be a stupid question. It is possible, but it makes handling the source data a bit tricky because MODIS tiles are 2400 by 2400 pixels. So we would have to split up the input data and read only sections of that in order to build smaller cubes. There's other ways in which we can get around that problem of not being able to access the raw data with multiple threads. And that's simply to actually have different processes that all write to a shared memory. So it's a quick win. Although we prefer to try and address the root of the problem, which is really just this old library. HDF4 was written 20 years ago when doing stuff like this was impossible. So it wasn't really a concern to be able to read data from multiple threads. And actually, GDAL has a global lock over all HDF4 operations. So GDAL has made HDF4 thread safe, but obviously at the expense of not being able to use the data multithreaded. Where does all this code live? I was waiting for that question. It is still in the research phase, so I'm afraid there's no GitHub URL like many of the other presentations. But I do hope to be able to make this available within the near future. Unfortunately, this also depends on sort of our research budget to get this stuff into a publicly accessible form. But I hope to be back at FOSS4G with a URL where you can download this. Any more questions? Thank you very much.
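For readers who want to experiment with the storage layout discussed in this talk without the C++ stack, a rough h5py equivalent — each chunk holding complete time series so a single pixel's history decompresses in one read — could look like this. The dataset name, chunk shape, and the choice of gzip here are illustrative, not the production configuration:

```python
import h5py
import numpy as np

# Shrunk for illustration: a real 2400 x 2400 MODIS tile has ~5.76 million pixels.
n_pixels, t = 100000, 667
cube = np.zeros((n_pixels, t), dtype=np.int16)  # would come from the transposition step

with h5py.File('h20v11_cube.h5', 'w') as f:
    f.create_dataset('time_series', data=cube,
                     chunks=(4096, t),           # whole time series per chunk row
                     compression='gzip', shuffle=True)

# Reading one pixel's series touches only the chunk that contains it.
with h5py.File('h20v11_cube.h5', 'r') as f:
    series = f['time_series'][1234, :]
```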
|
Description: We describe a system that transforms sequences of MODIS images covering the entire Earth into time-optimized data cubes to provide rapid access to time series data for various applications. Abstract: Satellite time series data are key to global change monitoring related to climate and land cover change. Various research and operational applications such as crop monitoring and fire history analysis rely on rapid access to extended, hyper-temporal time series data. However, converting large volumes of spatial data into time series and storing it efficiently is a challenging task. In order to solve this Big Data problem, CSIR has developed a system which is capable of automated downloading and processing of several terabytes of MODIS data into time-optimized "data cubes." This time series data is instantly accessible via a variety of applications, including a mobile app that analyzes and displays 14 years of vegetation activity and fire time series data for any location in the world. In this presentation we will describe the implementation of this system on a high-performance Storage Area Network (SAN) using open source software including GDAL and HDF5. We discuss how to optimally store time series data within HDF cubes, the hardware requirements of working with data at this scale as well as several challenges encountered. These include writing high-performance processing code, updating data cubes efficiently and working with HDF data in a multi-threaded environment. We conclude by showing visualizations of our vegetation and burned area time series data in QGIS, web apps, and mobile apps.
|
10.5446/31629 (DOI)
|
This is a session on mobile vector maps with Mapbox GL, slightly different name than what's in the program. Mapbox GL didn't have a name when we submitted the proposal back in February or March or so, but ultimately that's what it became. And yeah, I'm going to talk about the Mapbox GL stack as a whole, but more specifically applications for it on mobile devices. Quick introduction. I'm the mobile lead at Mapbox. I live and work here in Portland remotely for the company. And mostly working on mobile tools, largely iOS and mobile strategy for the company. How many folks are familiar with Mapbox? Most everybody? OK. We're basically building open source tools for custom map design and development that are all free as in beer. So we're trying to encourage an ecosystem around the tool set as well as a business model. And then the business model is built around cloud hosting of custom maps for apps and websites, be that raster tiles or moving to vector tiles and other geoservices. But basically cloud hosting is the business model around it. And we're at about 55 folks right now, headquartered in DC and San Francisco, and then a bunch of us worldwide, like myself. Mapbox GL is our name for our on-device vector rendering. Like all of our other tools, it's completely open source under a BSD model. So regardless of whether you want to be a Mapbox customer or not, you can both contribute to and use the open source projects that have come out of this stack. And the main thing that we focused on with this and the main reason we created it was because the GL is a nod to OpenGL, which represents the technology behind GPU acceleration. You could think of this as video game technology, essentially. That's the largest use probably for OpenGL technology, to new ends. And GPU acceleration: all of our computers and practically, basically every mobile device nowadays, has a GPU as well as a CPU. The CPU is your normal processor and deals with the load of all the tasks that you're trying to accomplish with your phone. The GPU sometimes is not being used to full effect. And it's another processor that is capable of massively parallel computations across every pixel on the screen many, many times per second. So when we talk about the GPU, what we're doing is writing specifically for that processor so we can tap into that and offload a lot of the processing into hardware that's dedicated and specialized in rendering graphics and dealing with pixels and representations on screen. So it's probably useful to jump right into a couple of demos and explain if you're not familiar with raster and vector and what the difference is and why this is important to us. When you deal with raster tiles, raster tiles are static imagery. This is an example tile from Zoom level 9 of the Portland area. And if you were to use this in a mobile application or even a web application, as you zoom in, it gets a little bit bigger. You zoom in again. It gets a little bit bigger still. You zoom in again and it gets a little bit bigger as it approaches this breaking point of needing to move to the next Zoom level. But at this point, you've got kind of fuzzy labels and everything's just been scaled and it's pixelated. When it gets to that breaking point, it transitions into four tiles at the next Zoom level down. And you can see as it does that, there's quite a jarring effect between the two Zoom levels. The labels are roughly in the same place, but there's a different density of labels. They've all kind of come at the same time.
And in fact, in most cases, these representative four tiles would load at different times, roughly about the same time. But overall, it's kind of a jarring experience. And that's one failure of raster tiles or one limitation of raster tiles. Another one is rotation. If you've got those same tiles in arrangement and you are rotating the map in response to the user's compass direction, for example, or just free rotation in order to explore the map a little better, the labels rotate with the tiles and everything's kind of baked in as raster imagery. And that can range from annoying to practically useless as you twist the map around. And so that's another limitation of raster imagery. The vector tiles, on the other hand, you can't see them. They're just data. They're the representative vector data behind the features that were rasterized in raster tiles. But this tile pyramid concept works the same way where you start with one and you zoom in. And that becomes four at a higher level of fidelity. The difference is you've got continuity between the zooms. As you move between the zoom levels, there's no sudden jump from the one tile to four tiles or back again as you zoom back out in the same way. Because although things are still rendered across the screen, possibly in a tiled fashion, everything's very fluid. And I'll show you what I mean a little more concretely in some examples. But label placement is probably the most stark example of the power of vector map rendering. You go from something on the left where you've got two discrete zoom levels to something on the right where you've got essentially infinite zoom levels in between from the point of view of what is rendered on screen. And so this is an example of Mapbox GL with zoom dependent styling. So as you zoom in, you can change both the features that show up as well as line thicknesses, which labels appear. You'll notice that labels fade in and out once a label appears. As long as you keep zooming in, it never goes away. It anchors to a particular place. And so it's a lot more fluid experience. You can do zoom dependent styling. This is an example we had on our website that illustrates the difference between going from zoom level 14 on the left to zoom level 16. And this is a definition in the Mapbox GL styling language of what we should do for this particular line width. It's listed in stops: pairs of 14, 1; 15, 3; 16, 4. That means at zoom level 14, we want the line to be 1 pixel. At zoom level 15, we want it to be 3 pixels. And at zoom level 16, we want it to be 4 pixels. And it's interpolated smoothly in between those zoom levels the entire time the slider is moved and the zoom is done in and out. And the labels stay in place. You've got this appearance. You may not even notice that the line thickness is changing at all, because your perception of it is just that you're getting closer to it. You're zooming in. You're looking at more detail. But the rendering effect is actually applying different line widths based on the zoom level continually. And so, of course, naturally extending from that, you've got the ability to rotate and have labels always stay upright. And when they collide with each other, have less prominent labels fade out for more prominent labels. So you can see quite a bit more flexibility in what you can do and a much smoother experience. So you're not limited to only vector and only raster. Rasters aren't going away, particularly with satellite imagery.
Satellite imagery is probably, I don't know, I won't say for our foreseeable future at some point. Who knows what's going to happen with that? But for the most part, it's pictures from space being taken, and so they're raster imagery. But you could always combine them. You could use satellite imagery underneath and then partially transparent roads and labels in vector drawn over top and combine layers of tiles, essentially vector tiles being rendered on the client and then raster tiles that have already been drawn, or in this case, photographed. Another thing you can do with GL is style transition. So this is a looping example of transitioning between, there's about 50 layers of data going on here, and each of them has two different styles. We've got a daytime mode and a nighttime mode. And I've kind of just put it on loop here for demonstration purposes. But the entire time the rendering is going on, it's essentially playing like a video. There's 30 or even 60 frames per second that are going by in an animation loop as you're interacting with the map. And you've got the capability of doing all sorts of transitions, whether it's color, it's fonts, it's size, it's edges zoom in, perhaps buildings fade in and start to appear and they don't show in lower zoom levels. And you can even do some more crazy things like superimposed video. This is raster imagery rendered in GL. And then superimposed on top of it is frames from a video shot with a drone, an unmanned aerial device. So you've got several month old satellite imagery on the outside and then, like yesterday, satellite imagery with people on the beach and the waves and whatnot. And so even though this is raster imagery in a video, because of the fact that we're rendering it continuously akin to an animation or a video, you've got this sort of fluidity potential. So that kind of highlights some of the eye candy from GL. Let's talk a little bit about the stack and what's involved in it and basically how we're building it out. We're doing it in a dual stack approach. We're targeting one set of software for native devices, mobile devices, as well as desktops, although that's more for testing purposes, and another set of the software for JavaScript for the web. I will point out that the JavaScript variant is not designed for mobile. To contrast that with something like Leaflet, where you can design a web map and then look at it on a mobile device and Leaflet works really well in that case. And this is largely due to, to date, pretty subpar GL experience on mobile devices, web GL experience in browser. In fact, on iOS devices, it's not even possible to use web GL, use JavaScript-based GL technology on a mobile device. So not yet. So they are two different stacks. The native stack we're developing in a native set of technologies, C++, C++11 variant, a little more modern, C++, because we could. And then atop the C++ native component that we're building, we're doing per platform binding. So that means on iOS, writing native Cocoa framework Objective-C and Swift code so that iOS developers can come in and integrate with their existing or future Objective C and Swift projects. And similarly, the goal for Android is direct Java interaction, so you don't even necessarily have to be a C++ programmer to use this in a native platform, native for your mobile operating system. So all of this, including these bits of the stack, are open source on GitHub as well, currently. The native stuff we put out in preview back in June and the web stuff came out in August.
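As an aside, the zoom-dependent stops described a moment ago boil down to a simple interpolation between (zoom, value) pairs. A rough Python sketch of the idea — purely illustrative, not the actual C++ renderer, and ignoring the exponential interpolation bases the real style language supports — looks like this:

```python
def interpolate_stops(zoom, stops):
    """Piecewise-linear interpolation between (zoom, value) stops."""
    stops = sorted(stops)
    if zoom <= stops[0][0]:
        return stops[0][1]
    if zoom >= stops[-1][0]:
        return stops[-1][1]
    for (z0, v0), (z1, v1) in zip(stops, stops[1:]):
        if z0 <= zoom <= z1:
            t = (zoom - z0) / float(z1 - z0)
            return v0 + t * (v1 - v0)

# The stops from the talk: 1 px at z14, 3 px at z15, 4 px at z16.
print(interpolate_stops(14.5, [(14, 1), (15, 3), (16, 4)]))  # -> 2.0
```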
And we also aim to support non-mapbox sources with this, be that raster tiles and other implementations of mapbox vector tiles. And I mean, it's wide open, so we could hook up to lots of other sources, too. So you could put regular old slippy map tile URLs in there. And that's a goal of the project ongoing, the idea of mashing up sources. So high level goals, I'll be a little unsubtle. One overarching goal with the project, probably the reason, if I could boil it down to one reason why we built this thing, was for full design control. We want to enable the ability to have carte blanche when it comes to designing maps in a vector environment. And that's kind of an overarching goal of all of our tools. We built tools for designers and developers to build custom maps for apps and websites. So it fits right in. I mean, the real high level goals of the project are to run on mobile and web using the best frameworks. We did some experimentation with going across between writing in something like C++ and getting that to work on the web, writing in JavaScript, trying to get that to work on native devices. We decided to split the stack and go to parallel similar stacks, but each in their respective technologies. Best for their respective environments. Another goal is we want it to be very easy to get up and running in apps and websites, and fun to design and iterate with. I mean, it's a lot of fun being able to have this sort of control with your base map or with your data overlays. And of course, it's built on open standards. And we'll create any along the way if we feel there's a need, for example, like the vector tile format that we built, which we did so that essentially what our stack does is watch a live change feed of OpenStreetMap edits. And within about five or 10 minutes, we can turn those around into vector tiles for the entire planet continuously. So when you're building on top of this sort of stack and using MapBox vector tiles, the OpenStreetMap edits are coming through about as close to real time as you can get. So you've always got the most up-to-date map. So there is a scenario possible where you could have an in-the-field OSM editing application where you add the building that you're in to OpenStreetMap and within several minutes see it in that very application coming back down from OpenStreetMap.
So that kind of stuff is all taken care of automatically by the software in the processing phase of things. And the spec for the vector tiles is also open on GitHub. So we publish specs as markdown files or text files on GitHub, so you can check that out as well. But back to mobile in particular, a couple of things that we've rolled into here that specifically focus on mobile considerations are the fact that you can composite on the server. So for example, I'm specifying here that I'd like to use Mapbox Streets, which is our OSM data set. And I'd like to composite that with a custom map that I made, Justin.blblblblbl, and those are combined so that for every square of screen space that needs to be drawn, I've got my layer on top of all of OpenStreetMap, those come down as one tile, and they're already pre-composited in the data. We do this with raster tiles as well, but with vector tiles we're also supporting compositing so that you basically have one data source per square of screen that you need to draw. Unless, of course, you're pulling from another source. When things are coming from Mapbox, you have the capability of compositing them. For vector tiles, the tiles are smaller and less numerous. Not making any hard promises, but in general, they're 10% to 25% the size. And you also don't need any vector tiles generally beyond Zoom level 14, which vector tiles can be over zoomed upwards of four to six Zoom levels beyond the layer they're designed for. So all the OSM data, for example, that Mapbox is serving out, only is served in vector up to Zoom level 14. And then you're able to, even at Zoom level 20, figure out what tiny, tiny piece of the parent Zoom level 14 tile you are currently in and is able to magnify the vector features. They're vectors. They scale mathematically. And they provide enough resolution to get very, very high Zoom levels. So once you've hit 14, you don't need to download anything anymore. And if you cache a bunch of 14, for example, for a metro area, you've now got that entire city, every point of interest, every street, every building outline with you already cached, and you don't need to download anything else. You just need to render it. So you can more easily go offline. If you take a count into consideration, so this, going back to our friends from the other animation, this is a Zoom level 9 tile, the first one, and then it's four children in Zoom level 10. This is a huge area, by the way. This is, I mean, you can see that's the city of Portland there, and this is like an hour from Portland. So it's a pretty large area. But I just used it because it's the tile we had before. So if you were going to take this area offline, you'd need, for the raster tiles, you'd need Zoom level 19, all the way down to maybe 17. So you'd need the 9. You'd need its four children at 10. You'd need those 16 children at 11. It keeps going from there. In fact, it'd be about 87,000 tiles to take that area offline. Granted, that's a big area. But to take that offline, you'd need to download 87,000 raster tiles. To do that, the vector tiles, you're talking about 9 through only 14, because you can draw 15, 16, and 17 from the 14s. And so the equivalent area is about 1,300 tiles, which are smaller amounts of data as well. Put that in a little bit smaller scale. If you're looking at a Zoom level 13 or 14 tile to start with, maybe two kilometers on a side, to go offline with that, down to 17, you'd probably need 65,000 tiles. 
Thereabouts, to go offline with a vector tile, you'd need one or maybe five, one and then the four below it. So you're talking about the difference between 30 minutes to take an area offline by downloading all those tiles to the device, and maybe five seconds, eight seconds, something like that. So you've got a lot more potential to take applications offline in mobile devices. But the other thing about mobile is mobile doesn't just mean a phone, a smartphone, in your pocket. Mobile is referring to personal and hyper-local, not to get too buzzword heavy, but the idea that it's a device that's always with you, and it's highly specific to your location or your context. And so it's really a pocket computer with an ever-growing, by the day, number of sensors. And this space will continue to grow. I don't know how many folks saw or caught up on the Apple event from yesterday, but it's just one example of a new piece of hardware, a watch, or wearable that has capabilities for mapping and needing to be offline and needing to respond to all sorts of contexts. So there's all kinds of different things you could do with sensors. Imagine if you can, instead of taking raster tiles that have already been drawn, perhaps by somebody else, and upload it to a server, and then redownload it to your device, you've just got the raw data, and you can control the style to every end and every customization you'd like to make in the application, entirely from the device, even when you're offline, if you've already got the data, the map appearance can change drastically. So with ambient light, when you're in a dark room, you could have a lower contrast map in night mode, but when you're in a bright outdoors, you could move into a high contrast map. You do this automatically in your application and not have to design two different maps. The color of it could change based on ambient light. Or pedometers, this is a big one. You could change the map info density or scale. So based on walking, running, biking, driving, flying, activity, smartphones are capable, if you allow them, to determine what sort of activity you're doing. It knows the difference between a walk, knows the difference between a walk and a treadmill, because you're not moving anywhere, but you're walking. So there's, you could have apps where they know the difference between a walk or a run, or a dog walk, and they could show dog parks or hide dog parks, or show water fountains, because they know, the context is there that you're not in a car, maybe this is the sort of thing that you care about. Or you're driving or running, you wanna show highways, or if you're running, you don't care about interstates, and the map could know this automatically and respond, and just be a fluid part of the application. When you're flying, you could show, you could just kind of filter it down, and show state and park borders, and kind of points of interest outside the window, and make an application that's a little more interesting for travel, just shows major cities, and kind of educates you about what you're flying over at 400 miles an hour. There's capabilities of things like onboard auto sensors. I mean, this is why I say mobile is beyond phone. This is tied to your phone. This is a little device by a company called Automatic that plugs into the analysis, the diagnostic port on your car, and then it talks to your smartphone, and it's able to gather all sorts of data. 
Not just about where you are, but whether you're braking really hard, or whether you're accelerating really fast, or what kind of mileage you're getting, or how much gas is in the tank. And so you could do things like build a class of applications with a personal auto map that could colorize zones where you tend to speed, and it could just be this subtle kind of background part of your navigation application that is a gentle reminder to maybe lay off the pedal a little bit, you tend to speed around here. Or you could integrate other behaviors, like the hard braking behaviors, and just things that decrease your gas mileage. Applications could style the map on the fly, and just interact with these inputs in ways that aren't possible with raster tiles without specifically designing all kinds of different combinations of maps, and all kinds of other combinations on the fly. There's an interesting API in iOS called Interesting Visits that's coming out in iOS 8. And it can basically record in an anonymous way areas that you tend to go to, that you frequent, essentially. Coffee shops, anything like that, and just provide those as useful contextual information. So if you give access to that API in a certain application, they could style the map, you could have a map that shades buildings based on the visit frequency, and this is a totally evolving dynamic thing, and it could just be something simple, like helping you explore a new part of your neighborhood. You tend to glance at the map, you see a really dark area, you know that's the area you walk through a lot, or you're in a new city, and you're trying to get a feel for the patterns of walking and transporting around the city. The map could become much more of a living, breathing canvas, and it doesn't just have to be this 2D thing that sits there. It can kind of pulse with the life of the application and the context around it. I had to put this in, no self-respecting iOS programmer wouldn't make a nod to the Apple Watch introduced yesterday. This is the back of it, it's got a heartbeat sensor. Borderline creepy, yeah. But you can start to do interesting things with this. Fitness applications could change the map or change the actual appearance of the rendering of the run that you just finished based on your heart rate throughout the run, and just give you a much better, more fluid understanding of how the workout went. There's lots of things you can do with these sensors when you're able to directly drive the rendering on the device. So what's next with Mapbox GL? We're working on higher level APIs, meaning less code that you have to write, easier APIs to get more done quicker, and not have to deep dive into some of the inner workings of it. So that's a constant evolution. We're a little behind on the Android front with Mapbox GL, and that's a big goal in the future, is to get that further along. And your apps, hopefully, I hope people at least experiment with this and try it out, just download it on GitHub and tinker with it. We're wide open to suggestions for new features, new directions to go in. Bug reports as always, pull requests and code patches, gladly accepted. In closing, vector tiles on device rendering are here today on a lot of fronts. There's a lot of app potential, and what Mapbox is trying to do is build these ball bearings and developer tools to encourage people to explore this space. And hackers are welcome. I would love to see you on GitHub or email if this is something that you're interested in.
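As a rough illustration of the client-side styling ideas above, here is a hypothetical sketch of driving a style from context; none of these function or property names are real Mapbox GL API, they only stand in for the kinds of inputs described.

    // Map a per-feature value (visit frequency for a building, heart rate along a
    // run) onto a simple color ramp, and pick overall style parameters from context.
    type Activity = 'walking' | 'running' | 'driving' | 'flying';

    function rampColor(t: number): string {
      const clamped = Math.max(0, Math.min(1, t));   // t in [0, 1]
      const shade = Math.round(235 - clamped * 180); // lighter means rarely visited
      return 'rgb(' + shade + ', ' + shade + ', 255)';
    }

    function shadeByVisits(visitCounts: Map<string, number>): Map<string, string> {
      const max = Math.max(1, ...visitCounts.values());
      const colors = new Map<string, string>();
      for (const [featureId, count] of visitCounts) {
        colors.set(featureId, rampColor(count / max));
      }
      return colors; // feed into the client-side style as building fill colors
    }

    function styleForContext(ambientLux: number, activity: Activity) {
      return {
        contrast: ambientLux < 50 ? 'low' : 'high',  // night mode in dark rooms
        showInterstates: activity === 'driving',     // runners don't need them
        labelDensity: activity === 'flying' ? 'sparse' : 'normal',
      };
    }

Because the raw vector data is already on the device, swapping these parameters only re-renders the map; nothing new has to be downloaded or pre-designed.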
We want this to be malleable and respond to input from the users and developers who are integrating it into their applications. Yeah, that's all I've got. These are a couple URLs, the Mapbox GL homepage. You can read both about the native side of things, as well as the JavaScript side. Our GitHub page, GitHub slash Mapbox, you can find all these repositories and filter them down and look for GL. We blog a lot. We're always trying to show off new ways of using our software and new directions that we're going in and experimentations. And you can feel free to email me as well. And yeah, at this point, I got a little time for questions. Thanks. Thank you. Any questions? Time for a couple. Is this on? Yeah. I know you had mentioned that the web API is not currently geared towards mobile devices. Will it actually function on, say, Chrome and Android if WebGL is enabled? Yeah, we've gotten it. The question is, will the JavaScript variant of Mapbox GL work on mobile devices if WebGL is enabled? We've done some testing with it. It's varying levels of performance. I mean, it varies by device and environment. But there's no reason it could not work. My point was that we're designing a native stack specifically for mobile devices. But not focusing as much on the JavaScript as of yet for mobile. OK. Did you look at all, or are there any possibilities, for a non-WebGL JavaScript-based version, or do you really need the WebGL for the performance? I spent a lot of time looking into an approach called Ejecta written by Phoboslab, which is basically using JavaScript code in a WebGL context and then mocking that WebGL context in native OpenGL, yes, a lot of time. And it just didn't end up working in the long run, both from a debugging point of view as well as getting asynchronous code to work. So like I said earlier, we tried a couple different approaches with being able to write once, run anywhere. And we just felt it was better to go with the C++ stack on the native side of things. I see a lot of people trying to switch. So I'm going to suggest one more question. Yeah, I was just wondering if you could speak to maybe performance as far as battery consumption is concerned when processing either vector tiles versus raster tiles, maybe? Yes, the question is battery, processing raster versus vector. Haven't done extensive benchmarking on it yet. I would have every assumption that GL would take more battery processing power. So I mean, it's something more akin to a video game than a textual, like a web browser type of application. That said, depending upon your use of animations and transitions, it's not a continuously flowing thing like a live HD video or a video game would be. Once you've rotated and panned and then removed your fingers, things sit there, rendering actually stops. It's static. It's not still consuming battery power. That's another big advantage in writing it in native code, is we're able to just really tune things to use the least resources and have the most performance possible. So I would expect, yes, it would use more than equivalent raster rendering, because that's just showing already existing images. But it should be a totally doable thing. I should point out that on mobile devices, in the native toolkits provided by each vendor, Google and Apple, they've been running vector for two, three or more years.
I mean, their main maps, their main base maps, have been vector, which shows it's possible to do it right and be really lean performance-wise. Yeah, we have to wrap it up. Sorry, feel free to catch me afterwards. Thank you.
|
Rendering maps from vector data is the next wave in custom cartography, and nowhere is this more important than on mobile devices. Modern mobile devices have high-powered GPUs for hardware-accelerated rendering and a multitude of sensors for environmental input, but also need to be keenly aware of network bandwidth constraints and have the ability to go offline. Mapbox is working on a new suite of mobile tools that render constantly up-to-date vector OpenStreetMap data into maps on the device. These maps can be customized completely client-side and even tap into ambient sensors such as GPS, compass, and pedometer. This session will show what's possible with this new open source toolkit, including client-side map style customization and influencing the user experience with sensor inputs, and will talk about high-level design goals of the tools and where they are headed next.
|
10.5446/31630 (DOI)
|
Hello everybody, thanks for coming and thanks for the brief intro to GTFS earlier Daniel. I'm going to talk a little bit more about that and co-present with my colleague Rob. We both work at Azavea, a small software company based in Philadelphia. And at Azavea we love advanced spatial analysis and the web. And I included the Chinese translation there because the project that we're working on is at an international scale and I just wanted to underscore the scope of what we're doing with open data and how important it is even bringing it to China, which is a bit of a challenge. So the general transit feed specification, it's relatively young, less than a decade old and yet we pretty much use it every day, anytime we want to find how to get from one place to another using public transit. And from what I understand it's spent its early years incubating here in Portland. TriMet did a lot of work with it along with OpenPlans, now Conveyal. So the specification is very flexible. It allows agencies to add data to the basic requirements and the way that it's structured accounts for a lot of variations in how transportation systems, public transit systems are put together. They're all very different and so the specification needs to account for that flexibility. And it's also super normalized. We saw that list of text files earlier that all go into GTFS and it's just a super normalized database structure. It all comes in as CSVs with text file extensions. And the data, some of those text files have spatial data, some of them have temporal data and then it gets even more complicated than that. I've spent a lot of time working with spatial data and so I sort of approach problems that way and when I saw this I thought, yeah, I got this, no problem. We've got lines on the map, deal with that all the time. This is Boston's transit network. All of the stops, put those on the map, points, we got that. And then the temporal aspect of it, I didn't realize how complex it was going to be. For every stop, any time a vehicle stops at a stop, there's a row in the stop time table. And then the sequence of stops are organized into a trip. So a route has multiple trips, which has multiple stops, but the stops, there's only one stop. They have all these different times. It started to get pretty complicated. And I'm talking about this to underscore the complexity and why some of the software that we're using is really necessary. To go beyond just the temporal aspect of it, transit systems operate in a cyclical fashion. So Sundays are Sundays for the most part, Mondays are Mondays. But it's semi-cyclical because then you factor in holidays and construction schedules and all of that is built into the GTFS data, hopefully. And what we end up with is an intensely granular focus on time and space. And so it's a particular date at a particular time. And this is important when you're doing trip planning. And so this is Asheville, North Carolina's bus system for one day, sped up really fast. It's kind of frenetic. And I put this together to try to understand where, what was being represented here and how we could work with it. And it's getting towards the end of the day. See the system slows down. And then it's nighttime and no more service. And other folks have done much nicer visualizations, much more complicated than I have. This is New York City. And I think it's sped up about 10 times. I recorded this a couple of nights ago.
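Stepping back to the stop_times structure mentioned a moment ago, here is a small sketch of reading those rows and grouping them back into trips; the field names follow the GTFS spec, while the surrounding code is just an illustration rather than part of any of the tools discussed in this talk.

    // Every time a vehicle stops at a stop there is one row in stop_times.txt,
    // keyed by trip_id and ordered by stop_sequence.
    interface StopTime {
      trip_id: string;
      stop_id: string;
      arrival_time: string;   // "HH:MM:SS", may exceed 24:00:00 for late-night service
      stop_sequence: number;
    }

    function groupByTrip(rows: StopTime[]): Map<string, StopTime[]> {
      const trips = new Map<string, StopTime[]>();
      for (const row of rows) {
        const list = trips.get(row.trip_id) ?? [];
        list.push(row);
        trips.set(row.trip_id, list);
      }
      // order each trip's stops so the trip reads start to end
      for (const list of trips.values()) {
        list.sort((a, b) => a.stop_sequence - b.stop_sequence);
      }
      return trips;
    }

A route then points at many trips, and a physical stop can appear in many of them, which is exactly the one-stop-many-times situation described above.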
And so we've got the different routes that are color coded, multiple trains on the same track going in different directions. And it's a really big sort of data concept to work with. So what are we using it for now? Pretty much what it was designed for, which is trip planning. And so Google's trip planner, Open Trip Planner, they take this schedule and spatial data and figure out where you're starting, when you're starting on a particular day and time, and then using the graph, figure out how to connect that route to get you to where you want to go. But what else can we do with it? This is a spatiotemporal model of a transportation system. So we can ask additional questions. We are working on this international project where we're building software to help transportation agencies plan and optimize their systems. So from accessibility and performance perspectives, these are the questions we want to ask. And a lot of them can be answered with traditional GIS methods. So we have buffers of stops for the T versus the rail. We can count the population that lives within a certain distance of the stops, consider them served, look at subpopulations like lower income folks. And then we can also sort of look at the frequency and weight that by the population at different times of day and different times of the week and ask some questions about how the system is actually performing and then compare that to some scenarios of ideas on how to improve it. We can also look at performance metrics like the speed or the number of vehicles per route, the distance between stops, the amount of time it takes to get between stops, and then look at which routes are doing well, which ones are sort of outliers or slow. These are also answers, hopefully, that could help transportation planners make a better system. We have, so the software goes and calculates a number of different metrics and then provides feedback to the people using it. One of these metrics is a lot more complicated and we couldn't use traditional GIS approaches. A lot of this is being done and it could even be done in SQL, directly in PostGIS. Some of these I roughed out that way. I'm going to talk about that more complicated indicator. I'm going to pass it over to Rob. Thank you. Yes, I want to talk about how we're calculating travel sheds and statistics based off of travel sheds using some software that we created called GeoTrellis Transit. Once we have GTFS data, we might want to combine it with other data such as OpenStreetMap data and say population data to answer questions such as how many people can arrive at an area by 9 a.m. traveling for an hour or less by public transit. That might be a good indication of how well the public transit system is serving people, in terms of being able to get to a job on time in that area. Once I can answer that question for a range of areas, how do I compare those different values with respect to that transit access? To begin answering that question, we need some data. If we had a population data set perhaps in a raster form where there's a population count per raster cell, we can eventually do a zonal summary of that raster. What a zonal summary is is just taking the values under a zone and aggregating them in some way. The picture you see here is a zonal summary created using GeoTrellis for another application. I'm not going to go into what GeoTrellis is right now, but if you're interested, I'm doing a talk on it tomorrow during session two during the invited talk.
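Here is a minimal sketch of that zonal summary idea, assuming a population raster and a travel-shed mask that share the same grid; it is only an illustration of the concept, not GeoTrellis code.

    // Sum the population cells that fall inside a travel shed.
    function zonalSum(population: number[][], insideShed: boolean[][]): number {
      let total = 0;
      for (let row = 0; row < population.length; row++) {
        for (let col = 0; col < population[row].length; col++) {
          if (insideShed[row][col]) {
            total += population[row][col]; // people reachable within the shed
          }
        }
      }
      return total;
    }

Repeating that sum for a shed built around many different areas is what turns a single number into a comparison across the region.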
In this zonal summary, we actually have a polygon that is drawn by the user on the map, and then there's a summary of the zone. We don't want the user to draw a map. We actually want to create a travel shed and then do a zonal summary on that. A travel shed is a description of the areas that can be traveled from and to, or traveled to or from a point, in a given amount of time. What we see here is a travel shed in Philadelphia. Specifically, it's from leaving from the point of the marker, traveling a maximum of one hour at 6.30 p.m. on a weekday. The colored area is the travel shed, and there's different colored areas that represent the travel sheds for 60 minutes, 50 minutes, 40 minutes, and less. This was actually generated by a demo application of the GeoTrellis Transit capability at transit.geotrellis.com. You can go around. It's interactive. You can play with it. It's kind of fun. How do we generate these travel sheds? We will get into the graph theory behind actually generating the travel shed. First off, we need to look at the OpenStreetMap data and the GTFS data in a graph structure. It's actually a time-dependent weighted multi-digraph, which is a lot of prefixes on graph. I'm going to cut that up and explain what that means piecemeal. First off, I'll explain what a graph is. This is graph theory 101, the quickest version of such a thing. A graph is a composition of a set of vertices or nodes and edges between those nodes. The graph we see here, there's six nodes labeled 1 through 6. There's edges between them. There's an edge connecting 6 and 4, 4 and 3. What a digraph is, or a directed graph, is a graph in which the edges are directional. There's an edge connecting 3 and 10, but there's no edge going from 10 to 3, the arrow points 3 to 10. You can think of it as a one-way street. A weighted graph is a graph in which there's weights assigned to each of the edges of the graph. If we look at this graph and think of A as representing an intersection and C as representing an intersection, there's an edge connecting them that has a value of 5. Perhaps that's 5 minutes. If you walk from A to C, it would take you 5 minutes. Then a multigraph is a graph which allows more than one edge to connect nodes. In this example, if we look at vertex 5, you can see it, and say that's an intersection, and vertex 1. There's two edges that connect. There's one with weight 6, which could perhaps mean if I walked, it would take me 6 minutes. Then an edge connecting that has 3. Maybe if I take the bus, then it would only take me 3 minutes. That's really dependent on when the bus actually leaves and when I'm at the node. That's where the time-dependent nature of a transit graph comes in. That's really an interesting specific aspect of transit graphs that makes the computations on them a little more difficult. If I had a Main Street station, a Broadway station, and I had the GTFS specify that there was a bus that ran from 9 a.m. to 10 a.m. every 20 minutes and took 10 minutes, you might ask me how long does it take to get from Main Street to Broadway, and I would not be able to answer your question because I don't have enough information. If you say starting from 9.15, how long does it take? I can actually give you an answer. It's going to take the 10 minutes of the bus ride plus the 5 minutes you have to wait for the 9.20 bus. Now that we have this structure in which to view our transit system, what we'll want to create off of it is a shortest path tree.
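The Main Street example above can be written down directly. This is just a sketch of that one time-dependent edge, with the schedule numbers taken from the example rather than from any real feed.

    // A bus from 9:00 to 10:00, every 20 minutes, taking 10 minutes. The cost of the
    // edge depends on when you reach the stop. All times are seconds from midnight.
    const FIRST = 9 * 3600;
    const LAST = 10 * 3600;
    const HEADWAY = 20 * 60;
    const RIDE = 10 * 60;

    function costToBroadway(arrivalAtMain: number): number {
      if (arrivalAtMain > LAST) return Infinity;            // no departures left
      const sinceFirst = Math.max(0, arrivalAtMain - FIRST);
      const departure = FIRST + Math.ceil(sinceFirst / HEADWAY) * HEADWAY;
      if (departure > LAST) return Infinity;
      return (departure - arrivalAtMain) + RIDE;            // wait plus ride
    }

    // costToBroadway(9 * 3600 + 15 * 60) === 15 * 60  // arrive 9:15, wait 5, ride 10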
A shortest path tree has a specific point, a starting node or an ending node, and answers questions like, what is the quickest way to get from A to B? There's two types of shortest path trees. There's departure time shortest path trees, saying starting at time T, how long does it take to get to the other nodes in the graph? Yeah, starting at time T and starting at this node. An arrival time shortest path tree would say, I want to get to this node at this time, so at what points would I leave the other nodes to arrive at this single node? There's a lot of algorithms in graph theory that generate shortest path trees. This is an animation taken from Wikipedia about Dijkstra's algorithm. This is an algorithm that can tell you what's the shortest path from A to B, but inside that algorithm it actually generates a shortest path tree that tells you what's the quickest way to get from the start node to the other nodes. We can modify Dijkstra's algorithm to generate shortest path trees on these time-dependent multidigraphs and create transit system shortest path trees. This visualization of a shortest path tree is of a transit system, and the thicker lines indicate how quickly you can arrive at a point on the transit system. This is a visualization that was actually created by GraphServer, and it's featured on the GraphServer page, and GraphServer is an open source multi-modal trip planning engine. The last commit on this project on the GitHub's master is early 2011, and the reason for that is that all the development sort of shifted to another project called Open Trip Planner, which we heard about, which is a really great, large, actively developed transit routing engine software that uses OpenStreetMap data, uses GTFS for journey planning. It has a batch analysis mode for doing transit network analysis like we're talking about, and it has some sophisticated graph theory shortest path algorithms, including A star and an optimization called contraction hierarchies. This begs the question, why create GeoTrellis Transit when there's already this amazing trip planning open source software package, and the reason is performance. When we were tasked with creating travel sheds, we had a couple spikes that used Open Trip Planner through its RESTful endpoints, and then also just as an imported library, and it just wasn't getting the performance that we needed. We created GeoTrellis Transit, and it gave us the performance we needed. What makes GeoTrellis Transit fast? It's the way it represents the graph in memory. Open Trip Planner represents its edge weights dynamically, which means as it traverses the graph, it calculates the edge weight based off of a state object. It looks up specific user options, and has a lot of conditionals based off the edge type that compute the weight, which makes it really, really flexible and able to answer a wide range of transit questions such as, like, what bus line should I take to get from point A to point B? GeoTrellis Transit, on the other hand, loses a lot of that information and does pre-processing to pack the weights of the edges into an indexed data structure that has a really, really fast edge weight lookup for incoming edges and outgoing edges.
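As a rough sketch of the modification just described, here is Dijkstra's algorithm with the edge cost evaluated at the time you reach the edge's source node. It is a plain TypeScript illustration rather than the actual GeoTrellis Transit code, and the linear scan for the next node stands in for a proper priority queue.

    interface TimeDependentEdge {
      to: number;
      cost: (t: number) => number;  // seconds to traverse when reached at time t
    }

    function earliestArrivals(adj: TimeDependentEdge[][], start: number,
                              departure: number): number[] {
      const arrival = new Array<number>(adj.length).fill(Infinity);
      const done = new Array<boolean>(adj.length).fill(false);
      arrival[start] = departure;
      for (let i = 0; i < adj.length; i++) {
        // pick the unvisited node with the earliest known arrival
        let u = -1;
        for (let v = 0; v < adj.length; v++) {
          if (!done[v] && (u === -1 || arrival[v] < arrival[u])) u = v;
        }
        if (u === -1 || arrival[u] === Infinity) break;
        done[u] = true;
        for (const edge of adj[u]) {
          const t = arrival[u] + edge.cost(arrival[u]);
          if (t < arrival[edge.to]) arrival[edge.to] = t;
        }
      }
      return arrival;  // earliest arrival at every node, the departure-time tree
    }

Keeping that inner cost lookup cheap is exactly what the packed representation described next is for.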
We actually packaged our graph into this tight data structure that has the vertices indexed base 0 to n minus 1 that indexes into one array, that indexes into another array, that really just gives information about the incoming or outgoing edge and the start time as seconds from midnight that the edge becomes valid, the weight, which is the duration in seconds of traversing that edge, and then the destination node. We lose some flexibility in what we can ask the transit graph, but we gain a lot of performance, and we can still answer a wide range of questions, including generating travel sheds. So to get back to the original question, you know, how many people can arrive at an area by 9 a.m.? The solution to that is to generate the arrival time travel shed for that area and then do a zonal summary of that travel shed on top of that population layer. And to answer how we would compare those relative areas, we would generate that travel shed zonal summary statistic for each cell of a raster and then color the raster based on the values, based on some color ramp, and then paint that on a map. Now we have sort of a heat map of what areas are better served by this statistic than others. Because rasters have a lot of cells, it's a lot of computation, you can kind of see why we need the travel shed generation to be as fast as possible. So in losing some information about the transit graph, and being able to generate statistics like these travel shed statistics very quickly, we're hoping to give people doing transit network planning or schedule optimization a wide range of travel shed statistics that they could quickly iterate over if they're modifying the GTFS and trying out different schedules so that they can, you know, make our transit systems better serve the public. That's it. Thanks. Any questions? No punch line. Punch line. Yeah. I love it. It's in development. What if you have data that's like updating continuously? Can you take that in? Like streaming GTFS updates? Yeah. Well, so preliminary benchmarks have kind of placed the generation of a statistic like that, with the population statistic, at around five minutes; we're optimizing currently to get that lower. But so the streaming would probably have to be like every five minutes. And that's after the graph generation. So there'd have to be a process of streaming and the transit graph changes and then generating the new statistics from it. So it's possible, but I don't think that there's, we're not on a path to generate that sort of stuff in real time to actually just see it change on a map, which would be awesome. But maybe on a big enough cluster, a big enough cluster. All right. Thanks. Thank you.
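To picture the flattened layout described at the start of this section, here is a rough sketch with one index array pointing into packed edge arrays. The field names are mine, chosen to mirror the description, and only the outgoing side is shown; this is not the actual GeoTrellis Transit structure.

    interface PackedGraph {
      firstEdge: Int32Array;  // length n + 1; outgoing edges of vertex v live in
                              // positions firstEdge[v] .. firstEdge[v + 1] - 1
      startTime: Int32Array;  // seconds from midnight when each edge becomes valid
      duration: Int32Array;   // seconds needed to traverse the edge
      dest: Int32Array;       // destination vertex of the edge
    }

    // Relax vertex v's outgoing edges given the current arrival time at v.
    function relaxOutgoing(g: PackedGraph, v: number, now: number,
                           arrival: Int32Array): void {
      for (let i = g.firstEdge[v]; i < g.firstEdge[v + 1]; i++) {
        if (g.startTime[i] < now) continue;   // that departure has already left
        const t = g.startTime[i] + g.duration[i];
        if (t < arrival[g.dest[i]]) arrival[g.dest[i]] = t;
      }
    }

Everything becomes a flat typed-array lookup, which is the trade the talk describes: less flexibility in the questions you can ask, much faster travel shed generation.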
|
General Transit Feed Specification (GTFS) data is the open standard for representing transit systems in space and time. While developing an open source planning application for public transit agencies, it became clear that processing speed was the primary impediment to calculating transit coverage indicators within a reasonable time. At a glance, GTFS is just a set of simple CSV files organized relationally with key fields. But transit systems are far more complex than just spatial data for routes and stops. They need to be able to model spatial-temporal relationships embodied in transit schedules as well as semi-cyclical and shifting schedule patterns. Additionally, the specification is flexible enough to represent many different approaches to operating transit systems and the same system attributes can often be represented in multiple ways.While some transit system metrics are fairly straightforward to compute, certain public transit system metrics are best modeled as "travel shed" represented by raster coverages or isolines derived from them. The GeoTrellis Transit project is an extension of the open source GeoTrellis framework and was created to calculate travel shed rasters using GTFS and OpenStreetMap data. GeoTrellis Transit accomplishes this by creating a time-dependant graph structure that can rapidly perform shortest path queries at a given time of day, based on the public transit schedule.The challenge in developing GeoTrellis Transit involved designing a time-dependent graph structure that contains information about how the nodes connect at any particular moment in time during traversal. Shortest path algorithms on time-dependant graphs need to take into account arrival times at any given node, as well as wait times until an edge becomes available. This makes fast calculation of shortest path trees on time-dependant graphs difficult, which GeoTrellis Transit optimizes using a novel data structure to represent the graph.This presentation will introduce the GTFS standard and identify where difficulties may arise, especially for large systems. It will also describe how GTFS, OpenStreetMap and GeoTrellis Transit can be combined to build a fast time-dependant graph structure that can then be used to create time-based shortest path trees and travel shed rasters.
|
10.5446/31631 (DOI)
|
So in that moment it is safe.
We are very interested in this library; it matches some of the customer requirements that we will be going into in a minute. We are going to try and make these libraries thread safe. Which means we can use it in other APIs such as GDAL. So who are Cloudant? These are the only slides I have on Cloudant; we are not doing a business pitch here. I would like to give you some background. We are contributing to Apache CouchDB. It is the JSON document store written in Erlang. We offer distributed databases as a service, and we are massively scalable and highly available. We now offer advanced geospatial databases based on CSMAP and GEOS and an external spatial index. One of our key features is synchronisation. So, synchronising from a server down to a device. On the right there we see the so-called Quorum model, which if you are familiar with Amazon Dynamo, you would understand. That is that you write to one node, and then you have three copies of the data, node two, node three, and node four. That gives you the high availability and consistency. Well, not consistency, because you are eventually consistent with the partition tolerance. So you could lose node two and node three, and your data will still be available on node four. So how do Cloudant use CSMAP? Well, the implementation is in Erlang, and we wrap it up as a port. So what is a port? A port is just an Erlang managed process that operates over standard I/O. By being an Erlang managed process, it means that if that executable was to die for any reason, it would get restarted. You cannot kill it. The Erlang VM will also spawn as many of these as are required to go and match the load. So 10 users, you could go and spawn another process, and that is all configurable. This means that even though CSMAP is process-locked, not thread-safe, it doesn't really matter for Erlang. We are using Erlang green threads, and we can spin up as many of these as we need. Because CSMAP is a very, very small footprint, that doesn't matter either. We can just go and keep spinning them up. That process is running all the time, listening on standard I/O, so we are not starting it every time. It is just receiving a request. We actually made this open source, so you can go and download this from our GitHub page. So one of our key requirements was that we wanted to perform our operations on the ellipsoid. We didn't want to assume that there was a sphere; we had to do it on the ellipsoid. If you read the CSMAP documentation, it says, the solution of the geodetic inverse problem after T. Vincenty, modified Rainsford's method with Helmert's elliptical terms. It means basically do a calculation on the surface. Don't assume that we have a sphere. It was great working with some of the IBM teams who were very, very proud of their Vincenty algorithm. We were just saying, yes, that is probably already in CSMAP and you have it in Cloudant already. So how does it actually look? I thought I'd dive into a bit of code just to show you how easy CSMAP is. Please do ask any questions at the end if you have any. You can read the code on GitHub too. If you do a convert, you see that the CSMAP convert function takes in a source, takes in a key, and takes xyz as a coordinate, a coordinate array. So it's going to convert automatically from that source definition using a key name, xyz, and give you the result. Then we need to get the ellipsoid. So here we get the ellipsoid with a key name, and that's going to give us the values there of the radius and the eccentricity squared.
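As a rough picture of the port pattern described above, here is a sketch in Node-flavored TypeScript rather than Erlang. The csmap-worker executable name and the line-per-request protocol are assumptions made up for the illustration, not Cloudant's actual implementation.

    import { spawn } from 'child_process';

    // Keep one long-lived worker process that wraps the coordinate library and
    // answers requests over standard I/O, so the library is never restarted per call.
    const worker = spawn('csmap-worker', [], { stdio: ['pipe', 'pipe', 'inherit'] });

    // One request at a time for simplicity; a supervisor would respawn the worker
    // if it ever died, which is what Erlang-managed ports give you for free.
    function convert(sourceDef: string, targetKey: string,
                     xyz: [number, number, number]): Promise<number[]> {
      return new Promise((resolve) => {
        worker.stdout.once('data', (buf) => resolve(JSON.parse(buf.toString())));
        worker.stdin.write(JSON.stringify({ op: 'convert', sourceDef, targetKey, xyz }) + '\n');
      });
    }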
Now what do we do? We're using GEOS as our geometry library right now, so we're going to create a circle, and we're going to create an ellipse. So we need to calculate the radius, or the major and minor of the ellipse. Yes, we're fully aware that internally GEOS is approximating that with a polygon, where you can say whether you want five, ten, fifteen points to approximate your circle. But for the US customer, just calculating the radius exactly, or calculating the major and minor, is sufficient. So you don't want to assume that this is a sphere because then you're going to have a lot of inaccuracy. But doing the calculation on the ellipsoid, you're going to get a lot more accuracy, and we're going to give you a demonstration on that. So here we've got the azimuth calculation here. We're taking xyz, we're going at 90 degrees, up the y axis, to get the x range, and we're going to get the result. Then the next one, we're going along the x axis to get the y range. So what's that? One, two, three, four lines of code, and you're doing a very, very complex GIS function. OK, so I apologise about the layout there, but I'm going to give you a quick live demo of how this actually works. And I don't have Wi-Fi. OK, give me one second, I just got to log in. And now I'm logged in. So here's a Leaflet vector layer example using Cloudant as a backend spatial database. And this is also open source. You can go and download this JavaScript library. So that's just really something very, very simple with a polygon. So you're just showing the capabilities here of GIS and CSMAP. So we should get some results back in a minute. Let's just try that one more time. OK, I'm having some data problem, availability problems here. There we go. OK. So that's a bounding box. And let's do the same with a polygon. And then I'm going to show you how that code just worked with a circle. OK, so that's working, you can see. Now let's do a circle. So, Leaflet here is drawing on a Cartesian plane in my web browser. On the server side, I'm actually going to do this calculation on the ellipsoid. OK. Now I'm going to do a nice big circle to get plenty of data. So you're going to see that some of the points are just outside the circle. And that's the approximation thing we're talking about where actually internally, GEOS is using a polygon to approximate the circle. But you will find that the radius, the actual distance there from the center to the outside, has been calculated on the ellipsoid. And you can also do, yes, with an ellipse. It doesn't just have to be a circle, so you can do a major and a minor on an ellipse. OK. So MetaCRS is an OSGeo group that brings together GSTS, Proj4, CSMAP. I hope I don't forget any of the projects. But the aim is to coordinate all the coordinate system activities as one project from an OSGeo point of view. So when I write to the CSMAP mailing list, I'm getting answers back from people from Proj4 as well. You know, there's quite a lot of harmony there. It's a fairly active mailing list. I'd say it's about two or three emails a week. And I encourage you to join. One of the primary reasons why I submitted this talk is that I'd like to see more interest around CSMAP. Currently it's Autodesk, Safe, Normalson and me. I'd like to see it expand a bit further. It's a very, very powerful library that's very easy to use. And as soon as we add thread safety, I think it'll be on a par with Proj4. So as I said I'd leave it to the end, I'll tell you about the prizes. So currently I have two Raspberry Pis.
We also have a couple of pedals that you can take as well. So please come by our booth and drop your business card off. And I'll be there and be able to talk to you as well. Do you have any questions? Is the footprint of the CSMAP API really large? You said there's like 13 functions or whatever that you need to do stuff. Is there a gigantic list of functions to be able to do things that are all public? Is the public API as big or is it pretty compact? It's been broken out into a high-level API, which is about 10 functions, and a low-level API, which is a lot of functions. So most users use the high-level; only if they need to do something very specialised do they drop down to the low-level interface. The other question I would have is, does it do datum transformation? It does, yeah. What about grid files and whatnot to support all of that? Yeah, that's actually quite sizable. That's about 650 megabytes. We have to deploy that using Chef on our clusters, so we push that every time. But they are compiled and they're not being read as plain text files. Hi. Do you support some of the work for all the items? Currently CSMAP isn't, but that's been something that I'd be interested in adding. Yeah. Hi. Are there any projections for the regulatory bodies or is it just for... I don't believe there's PDS support right now. So no, I don't believe that's the case. Any other questions? No, thank you for your time.
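To picture the circle and ellipse handling from the demo earlier, here is a small sketch of the polygon approximation that was mentioned: once the geodesic calculation has produced the x range and y range around the centre, the shape itself can be approximated with however many vertices you like. This is only an illustration of the idea, not code from Cloudant or GEOS.

    // Build a closed ring approximating an ellipse from a centre point and the
    // semi-axis lengths returned by the ellipsoidal range calculations.
    function ellipseRing(cx: number, cy: number, xRange: number, yRange: number,
                         segments: number = 32): [number, number][] {
      const ring: [number, number][] = [];
      for (let i = 0; i <= segments; i++) {
        const theta = (2 * Math.PI * i) / segments;
        ring.push([cx + xRange * Math.cos(theta), cy + yRange * Math.sin(theta)]);
      }
      return ring; // first and last vertex coincide, closing the ring
    }

The accuracy the talk cares about comes from computing xRange and yRange on the ellipsoid; the ring itself is the same planar approximation any geometry engine would use.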
|
CS-Map is often used as a reference but has not been as widely adopted as proj4. This presentation describes how CS-Map has been used in a distributed geospatial database for big data. The presentation describes the benefits of CS-Map, in particular its whole earth support, and also its disadvantages, primarily that it is process locked. The aim of the presentation is to demonstrate that having more than one coordinate system library is a good thing and to encourage development of coordinate system libraries.
|
10.5446/31634 (DOI)
|
I just wanted to go over, sort of as every project manager really should, level set expectations with either the client or the audience, right? So this is not a heavy technical presentation. It's really intended for managers, decision makers, curiosity seekers, those who are interested in plows or those generally interested in what happens in New York City. So if that's not you, I won't feel upset if you get up and go into one of the other presentations. If not, hopefully you can stick around and I'll entertain and inform you. So with that, let's plow ahead. I wanted the sound effect, but I couldn't figure it out. So why should you listen to this presentation? So I'm probably preaching to the choir here if I start spouting out about how we use open source, right? From my perspective working in city government, I thought it was an interesting story to show or demonstrate at least how open source can be used in an otherwise very conservative organization. I see in the federal government there's a fair bit of open source work, but in a lot of city and state governments there is a real reluctance to it because obviously governments are often risk averse and there's a perception that open source is, you know, everybody's managing this code, it's all willy nilly, you know, and you should be really careful. But there's also a prevailing belief that if you buy shrink wrap software or commercial software with paid support, you get around it, you're getting a better product. Well, you obviously need to do the same due diligence in selecting whatever tools that you use, whether it's closed or open source. And so New York City is an example of using open source to develop an application that's fairly high profile, gets a lot of use, and that is PlowNYC. So very briefly I work for the City of New York, Department of Information Technology and Telecommunications, it's a mouthful. I bring that up because we're one of 50 some odd agencies within the city. Our mandate is IT services. We don't plow the streets. We provide services. Specifically I manage the mapping, the GIS group within the city. And my role in doing that is I manage a group of about 16 people split between developers, your traditional analysts, systems admin type folks. We manage a lot of the geospatial data for the City of New York. We also build and support applications. So what we build is what we support. So we're very careful in what we select and how we build things. So this situation is what you would have seen looking out your window on December 27th, 2010. It's been referred to as Snowmageddon. It was sort of the perfect storm. The mayor and his first deputies were all on vacation in the Bahamas. The snow rate was very heavy. The accumulation was really a high volume. And the temperature was perfect because quite often the streets are warm and it doesn't start accumulating on the streets very quickly. This was every variable played into a bad storm. Really difficult to plow. People lost their jobs because of it. Ambulances got stuck, buses got stuck, so on and so forth. So the project. If you fast forward about a year, January 2011, Mayor Bloomberg does a weekly radio show and the interviewer starts asking him questions about Snowmageddon and the 16-point plan that the city put in place. By the way, are you providing any information to the general public because they might want to know if their streets have been plowed or not? He's like, yeah, we'll be doing that and we'll be delivering something this year.
So I was in a meeting within an hour and found out that, yes, we would be developing this application. So we hadn't been. But anyway, we were given, it was January, we had to deliver something by the end of the winter, so it was approximately six weeks to get this out of the door. So with that, the first thing we started to do was think about things. So what exactly should we develop here? Because really, all we had was a soundbite from a radio show. And so if you want to track plows, you want to see what the progress of plows are like. But why don't you just look out your window, because you have visual evidence just looking outside your window. Yes, it's been plowed. Do you really need to look at a smartphone or look at a web browser on your desktop or laptop to see if the street's been plowed or not? So we're scratching our heads trying to figure it out. And initially, we came up with some objectives of what we would try to achieve. So what we really needed to do was convey the snow operation progress to the residents and visitors of the city of New York. And we say snow operations because they use both plows and spreaders. And we needed the ability to handle a large volume of traffic. So this is a sort of incident where there's a snowstorm, the application gets activated, everybody is going to come on in. The duration of the snowstorm is maybe a day, they'll leave it on for a bit longer just to show the streets continue to be plowed. But it's a very short event, a lot of traffic at any one given time, so high availability, support a lot of applications, excuse me, users, really keep it simple and straightforward, right? Convey information in an understandable way to the lay public. And it was really about conveying that information and not really about the technology behind the scenes, right? So we wanted to deliver the minimal viable product, right? Go out with the initial release, meeting most of the undefined requirements, and then go out with future releases and build upon that, right? So it was a total team effort. I'm lucky to manage quite a cadre of very good developers. So we did develop this entirely in-house using the tools that we'd already been comfortable with, so learning new technologies on the job when you have a very aggressive timeline is not advisable. And on existing infrastructure, which was frail and aging but we had to make it work for at least the first winter while we hardened things for the next. So essentially the project became two separate efforts, right? You had the mayor's mandate and then you have a variety of stakeholders, Department of Sanitation, plows the streets, Office of Emergency Management, manages snow emergencies, they handle communication, and then you had City Hall and then you had my team with the development effort, right? So one, taking more of a waterfall, let's meet every other week, pontificate, talk, you know, throw occasional requirements out there, and the development team taking more of an agile approach. We had daily scrums, we were actually building what they were trying to formulate in their heads, right? So the first thing we really did was let's take a survey. Let's see what the lay of the land is out there with what some of the other cities are doing, right? Seattle on the top, Chicago there on the right. Not to be critical but okay, so seeing a cute plow icon dance around on your screen doesn't really tell you what's happening out there. It doesn't tell you whether your street's been plowed or not.
It's cute and all, might make a better game, but it certainly doesn't convey information. On the top over here, you know, they're plowing more buildings than they are streets, so we looked at those and we realized there wasn't really anything out there that really helped inform what it is that we should be doing. So which brought, you know, some other challenges in place, right? So we need to realize a vision from a person we didn't really have access to. The mayor wasn't going to sit in any of our meetings. He just said we had to do something and we had to hope that we hit the target and we weren't too far off the target and then adjust accordingly, right? Multiple stakeholders, the dreaded decision by committee when decisions were reached, and they focused on minutiae like colors and things like that as opposed to the functionality of the application. And what we really wanted to do initially was we would, let's track progress against the scheduled plow route, right? So plows are given a schedule of streets that they need to go out there, a route and here's what you do for the day, and let's track their progress against what they're expected to cover. Well, those plow routes were actually in narratives, in WordPerfect documents, to just give you some indication of how old they were, right? And if you read them and tried to map them, there were huge gaps between where they started and where they made left turns and it's, you kind of wonder how the streets actually get plowed if that's the narrative that they're following. So quickly we realized that wasn't going to work, right? So we had to come up with something else. We had the very aggressive schedule and then we needed to handle a large volume of traffic in a fairly short period of time is what we expected, right? So we started doing some initial visualizations of the GPS points, right? Creating vectors and showing those arrows there or the bearing of the vehicle. And the one going down Second Avenue looks pretty good. The one on First Avenue, either the driver of the plow was drinking that day, or the person drawing that fake line had maybe too much to drink that night, or there was a multipath error there. So it was safe. But anyway, we looked at that and realized that's similar to what Seattle had done and this is not going to really be very helpful, right? So we realized we needed to take an entirely different approach, and what we decided to do, and this is not going to let me do my transition because it's a PDF, but anyway what we did do was we snapped those GPS points to the street segment they should have been plowing and we had some intelligence in there in terms of looking at the directionality of the street, the bearing of the plow, looking at previous segments to ensure you weren't automatically making a quick right turn, right? So there was intelligence in that algorithm, snapped it to the nearest street segment and then we created time buckets. So we're showing here when the street was previously plowed, whether it was plowed in the last hour, up to the last 12 to 24 hours, and this was a decision in terms of the time bucket that took forever for the committee to make. But anyway, this is what we went with. So we get the GPS feed and this is the first iteration, right? And so what happens here is we get the GPS feed, we're snapping this to that street segment, which has a unique identifier.
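A sketch of the time-bucket idea just described: classify each street segment by how long ago its last GPS ping was seen and let the bucket drive the color. Only the "last hour" and "12 to 24 hours" cut-offs come from the talk; the intermediate thresholds here are made up for illustration.

    // Bucket a segment by the age of its most recent plow timestamp.
    function bucketFor(lastPlowedMs: number, nowMs: number): string {
      const hours = (nowMs - lastPlowedMs) / (1000 * 60 * 60);
      if (hours <= 1) return 'plowed within the last hour';
      if (hours <= 3) return 'plowed 1 to 3 hours ago';    // illustrative threshold
      if (hours <= 6) return 'plowed 3 to 6 hours ago';    // illustrative threshold
      if (hours <= 12) return 'plowed 6 to 12 hours ago';  // illustrative threshold
      if (hours <= 24) return 'plowed 12 to 24 hours ago';
      return 'not plowed in the last 24 hours';
    }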
Every 15 minutes we're pulling that GPS data store and looking at the last time stamp on a street segment and then putting those into the different buckets and then rendering it. And I'll give you sort of the technology behind how that's being done. But this was probably release 1.5, right? It was, there was an earlier release, but this was, this still sort of is somewhat emulating the sort of GIS desktop with the concept of layers on the right, and a bit more functionality would probably be wanted. All right, so this was the sort of more recent release, right? It's completely responsive, mobile compliant, right? Getting away from having to turn layers on and off, right? There's two different things that you can see. Snow vehicle activity, and I took this screenshot recently so there weren't snow plows in New York City. But it also shows the designation of each street, whether they're a primary street, secondary, tertiary, meaning they get plowed first, second or third, or all bets are off, we don't cover your street. So this is the current application. OEM has the controls as well as Sanitation to activate. You see on the right hand side, it'll indicate whether it's active or not. Here it was previously at the top, right? And it'll tell you it's current as of what date time and then the next time the ETL runs to pull the data and re-render it. So this is how it looks on a mobile Android device. So syncing up again with my notes here. So these are the variety of technologies that we used, and one really good anecdote that really is a real plus for open source, versus if we had gone with one of the proprietary solutions that shall go nameless. We ran into a defect with one of the core products we were working on and Boundless provided support to us. We have a support contract with them. We had an engineer on site within 24 hours. We had those defects corrected and posted back to GeoServer within 48 hours and they gave us a branch of the code which we had deployed within, you know, three days. So had that happened with one of those unnamed vendors, that would have been probably months, not days, right? So that's a real plug for open source right there. So these are the variety of technologies we use. We use Spring Batch to write the ETL. You're probably most familiar with all the other ones. We use Akamai as a content delivery network. So we're caching all the content closest to the end user. So how does it work? Kind of went over that previously. But I had a very simple non-technical graphic that I did to explain this to managers within the city of New York, right? So here's the data flow and here's how things work, because they're all like, wow, how does this really work? There are GPS devices within the plows which initially started out as being essentially cell phones and now we're embedding them with full AVL with dead reckoning. It goes up to a Zora database which is a partner of Verizon, right? Which then goes to NYCWiN, which is our internal wireless network. They have a data center where they get all the GPS feeds coming in. The red line is where my team really took over, right? We wrote the snap to grid but we deployed it to those NYCWiN servers. Our ETL runs every 15 minutes. It's polling that database, looking at each segment and the timestamp on every one of those; the ETL runs and populates the table. We have another view on that table that tells us all the segments where the timestamp had changed.
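A schematic sketch of that 15-minute cycle, with made-up names rather than the city's actual code: poll the GPS store, keep the latest timestamp per street segment, and report which segments changed so only their tiles need to be redrawn.

    interface Ping { segmentId: string; timestamp: number; }

    // One ETL cycle: fold new pings into the per-segment "last plowed" table and
    // return the segments whose timestamp advanced since the previous run.
    function runEtlCycle(pings: Ping[], lastPlowed: Map<string, number>): string[] {
      const changed: string[] = [];
      for (const ping of pings) {
        const previous = lastPlowed.get(ping.segmentId) ?? 0;
        if (ping.timestamp > previous) {
          lastPlowed.set(ping.segmentId, ping.timestamp);
          changed.push(ping.segmentId);
        }
      }
      return changed;
    }

That changed list plays the role of the "segments where the timestamp had changed" view: it is what tells the tile cache which areas to re-render, as described next.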
We then send GWC requests to render new tiles because everything's tiled. We serialize those tiles, store them on a disk and then all that content is then cached again on Akamai. The first time someone comes in, types in whatever street address, all of those tiles that are being rendered are then cached. Then in the next 15 minutes, all that content is then coming from the content delivery network and not coming back from our servers. It's got multiple levels of caching involved just to ensure that we can handle the sort of volume of traffic that we see. We do see a fair bit. A good presentation would be nothing without a certain number of stats, right? You have to read the small print over here on the asterisk by yourself. I'm not going to read that one. Anyway, so theoretical maximum on a 24-hour period, given the number of times that the ETL runs and the number of levels that we have, there's up to 192 million tiles that get regenerated in a 24-hour period. Those servers are humming along. We get, during a big activation, we probably get anywhere from one to two million visits. So a lot of traffic's coming in. There's 10,000 kilometers worth of roads. If you think of lane miles, probably four times that for what they have to plow. There's 2,800 snow vehicles, a mixture of plows and spreaders. So it's a large volume of GPS data. The GPS data is coming in every 10 seconds, but we're polling the database on a 15-minute cycle. Lessons learned. Well, some of these were learned on the project, a lot of these were just reinforced. You always have to listen, listen, listen. When you're dealing with a committee, it's certainly a challenge. Communication is always key. And then in terms of more on the technical side, putting out the minimal viable product, especially in an aggressive timeframe, is just really key. Get what you can out there and then add to it. It's easier to add than it is to subtract, right? And then go with what you know, especially in an aggressive schedule like what we had. Don't pick technologies that you haven't used before or you're experimenting. Use what you're comfortable with. And hopefully you have processes in place that when an emergency hits, you're able to respond quickly and you have some qualified individuals behind you. And I certainly do. The URL is this, but Wi-Fi has been flaky down here so I'm not going to even attempt it. But there's really nothing to see. But if you want to, I'm sure I'll put this presentation out there; the next time we activate during a snow emergency, you can check and hopefully things are moving along. And hopefully in the foreseeable future when we do actually have snow routes, we'll be able to track progress, where they've been against where they're expected to be. So somebody can come in and say, okay, my street's going to be plowed at 3 p.m., right? As opposed to, wow, it hasn't been done yet. You know what's going to happen? So with that, that's the end of my presentation. Happy to answer any questions. Thanks. Yes? I mean, so there is and there has been cases where we snapped to the wrong street and it's been shown to the public and actually some of the papers in the city have written about that. But yes, there are usually multiple, it's being pinged, there are GPS points coming every 10 seconds. So usually there's multiple hits in any street segment, right? They're traveling quite slow when they plow the streets, right? And when they're going faster, they're on the highway.
So we get usually at least two points for every street segment. But instead of just snapping to the nearest street, we're looking where the plow has previously been, right? So if you're doing 15 miles an hour, you can't bang a right very quickly, right? Because you'll see it snap to a street and then it'll go to the next. So we hold a certain number of points, we then snap and then we move on. So we're kind of doing a bit of smoothing and sort of normalizing the data to ensure that we can remove some of the stray points, some of the noise, right? The really problematic ones are like when I showed you First Avenue, when you have a whole line that's completely off, that's when we're mistaken, right? So in that case, that was actually a write-up last year, we had falsely indicated that the adjacent street York Avenue had been plowed when in fact First Avenue hadn't. Everybody was complaining. They're telling us the street's been plowed. It hadn't. The fact of the matter is we shouldn't really need to do snap to street because if we had real AVL in these vehicles, that wouldn't be an issue and that's being implemented this year. So it was sort of one of those stop-gap measures that needed to be done and it lived a bit longer than we expected. Excuse me? Yeah. So when something like that happens, we have no microphone to hand out? Yeah, sorry. Anyway, the question was how do you defend something like that? For us, it was more we had to defend it internally to the mayor and he defended it to the press. We had to defend what happened to the mayor and it really became a strong realization, hey, the current technology that we're using, the GPS technology, is insufficient. We could spend a lot of money, try to improve the snap to grid, and it's never going to work. The best solution is to get real AVL. So we took one on the chin but ultimately, as you can see this year, they're deploying new AVL. Yes, go ahead. Yeah, were the vehicles already equipped with the GPS or did that have to get installed for this project? It was being prototyped in a couple of areas and then it was quickly rolled out to all vehicles. It wasn't really AVL, it's more a cell phone in a steel box on the dashboard. So it was really low-tech. And then now that the data is being collected, is it being used to optimize the plowing? That's a really good question. So that's the direction that they're heading in. So the Department of Sanitation is sort of re-engineering how they do things and they're now going to digitize all the routes. Hopefully they're going to be tracking progress against those. So they're using it not as just a mechanism for informing the public but also to inform themselves and to improve how they plow things. The thing about the Sanitation Department is it's a union shop. So there's only so much flexibility you have in how you change the way they do things. And they all handle separate sections. So there's a degree of autonomy in each one of the sections that they cover and there's this person out there with a walkie-talkie helping to orchestrate things. So yes, there will be improvements made, but sometimes there are limiting factors, unions being one of them. Upgrades planned for WordPerfect? We're going to WordStar. And then we might go to Word. Or we'll just go to Google Docs, screw Microsoft now. Any last questions? All right. Sorry if I spoke too quickly but with the technical difficulties I figured I'd have to make up steam. Thank you.
|
In the winter of 2012, NYC's Department of Information Technology and Telecommunications (DoITT) was tasked with developing an application to track snow vehicle operations. The DoITT GIS team was given a mandate to have the application in production before the end of the winter. Due to the aggressive schedule, our approach was to get something up as quickly as possible while enhancing and improving over time. Beyond the schedule constraint, additional challenges were minimal requirements and decision-making by committee with no clear business owner. Three major tasks were required to complete the project: scale the existing infrastructure to better handle the expected demand, determine an approach for communicating the information to the public in a legible and understandable way, and develop and test the application. The team quickly undertook a multi-pronged approach to complete these tasks within a roughly two-month timeframe. Of all the tasks, scaling the infrastructure was the most challenging and difficult. High-profile application launches in NYC that come with press announcements tend to garner traditional and social media coverage, and with that national exposure and demand. And although the application would have been a perfect candidate to deploy in the cloud, that was not an option. Additional servers were added and the application was optimized and tuned for performance. To do so, multiple layers of caching were employed, including GeoWebCache and a Content Delivery Network. In terms of visualizing the data, we conducted a quick review of existing public-facing applications. There were not many examples at the time, with most cities choosing to show 'breadcrumbs' of a plow's path. We felt this method was not an effective way of conveying plow coverage; our objective being to show which streets had been plowed, not to show where a plow had been at a specific time. As such, we decided on visualizing the data by the time a street was last plowed. Five time-buckets were established and the street segments were color-coded based on the last GPS ping received on the segment. Every 15 minutes an ETL pulls the GPS data and renders tiles using GeoServer and GeoWebCache. The application, PlowNYC, was developed using open source and commercial software and custom code. These include OpenLayers, GeoServer, GeoWebCache, GeoTools and Oracle. Since its release, the application has been enhanced to handle greater traffic, support mobile clients and to simplify the interface. The presentation will cover these aspects of the project.
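As a rough illustration of the time-bucket approach described above, the classification step could look something like the following JavaScript. The thresholds and colors here are made-up placeholders for illustration, not PlowNYC's actual configuration.

```js
// Sketch of bucketing street segments by time since the last GPS ping.
// The thresholds (in hours) and the colors are illustrative assumptions.
var BUCKETS = [
  { maxHours: 1,        color: '#1a9641' },  // plowed within the last hour
  { maxHours: 3,        color: '#a6d96a' },
  { maxHours: 6,        color: '#ffffbf' },
  { maxHours: 12,       color: '#fdae61' },
  { maxHours: Infinity, color: '#d7191c' }   // not plowed recently (or never)
];

function bucketForSegment(lastPingTime, now) {
  var hours = (now - lastPingTime) / (1000 * 60 * 60);
  for (var i = 0; i < BUCKETS.length; i++) {
    if (hours <= BUCKETS[i].maxHours) {
      return BUCKETS[i];
    }
  }
}

// e.g. inside the 15-minute ETL, for each street segment:
// segment.color = bucketForSegment(segment.lastPing, Date.now()).color;
```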
|
10.5446/31637 (DOI)
|
Just so everybody's clear, this is the OpenLayers 3 talk. They switched rooms. If you're here for the previously scheduled talk, it's over on the D side. Oh, do you see standard that's really 135 and 35? How's that mic? Is that loud enough? Do we have to talk down here? Louder. OK. All right, so Andreas was not mentioned on the schedule, but this is a dual presentation. Andreas Hochevar is my co-conspirator here, and we're going to both be tag teaming for this presentation. We're here to talk about the recently released OpenLayers 3 library. I'll give a little bit more introduction to that in a second, but I'm Tim Schaub, and I work at Planet Labs. And I'm Andreas Hochevar, and I work with, as a consultant, for boundless. So just two weeks ago, we finally released OpenLayers 3.0. It's been about two years in the making. It's a complete rewrite. Shares no code with the original two versions of the library. And it was a pretty massive undertaking. It was a community, a consortium of companies funded it, raised funds for the development of it, and we raised some community funds for it. And we really had ambitious goals, and it was sort of a herculean effort to get it to a stable point. We finally, just two weeks ago, marked a usable section of the API as stable and released 3.0. So the way this talk is going to work is I'll just put a goal up here, something you might want to do as an application developer with OpenLayers 3.0. And then I'll show a code example, how the API is used, in this case. And then we'll move into a real example of what it looks like in a browser. So the first thing you'll notice is that OpenLayers 3 has split apart concepts of a map and a view. The map is a collection of layers. It has things like controls or interactions that we'll talk about coming up associated with the map. And the view is the actual view state. So it has things like center, rotation, resolution, or zoom as a shorthand for that. But where do you get that layer from? There was a layer collection in there. So your goal might be to use layers, tile layers from a number of different providers. So you could use a tile layer with an OSM source. These are two more new concepts here. A layer is about the rendering in the view of the data. And the source describes how to fetch the data. So in this case, the data comes from this OSM community OpenStreetMap tile provider. And it's rendered as a tile layer. You could use that same tile layer with a proprietary tile source. So in this case, it's Bing Maps. And you put in your key and the style that you want and use the same layer for that. So here's an example of a number of different layers. There's an aerial from Bing. A couple of things I wanted to emphasize with this example are the tile cache. Each of those sources I mentioned maintains its own tile cache. So as I zoom in here and I pan to a new area that I haven't seen yet at this resolution, we've got good internet here. But you'll see that these lower resolution tiles from a lower zoom level are being used as intermediate tiles while the target zoom level tiles load. So the idea is that you never pan off the edge of the map, given that if you load your map at a low zoom level and then zoom in, you should never pan off the edge of the map and see any white. Another thing I wanted to emphasize is this, or point out, is a tile queue. Again, it's pretty subtle here with a fast internet connection, but there's a prioritized tile queue. 
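To step back and make the map/view and layer/source split concrete before the demo continues, a minimal OpenLayers 3 sketch along the lines of what the talk describes might look like this. The center coordinates and the Bing Maps key are placeholders, and the page is assumed to contain a div with id "map".

```js
// Minimal sketch of the OpenLayers 3 map/view and layer/source split.
var view = new ol.View({
  center: ol.proj.transform([-122.67, 45.52], 'EPSG:4326', 'EPSG:3857'),
  zoom: 10
});

var osmLayer = new ol.layer.Tile({
  source: new ol.source.OSM()            // community OpenStreetMap tiles
});

var bingLayer = new ol.layer.Tile({
  visible: false,
  source: new ol.source.BingMaps({
    key: 'YOUR_BING_MAPS_KEY',           // placeholder key
    imagerySet: 'Aerial'
  })
});

var map = new ol.Map({
  target: 'map',                         // id of a <div> in the page
  layers: [osmLayer, bingLayer],
  view: view
});
```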
So if you look at my mouse down here in the lower right corner of the screen, as I zoom in, you should see, you're not going to be able to see with this connection, that the tiles around my mouse are loaded first. So if you have tiles that were requested at a previous resolution, when you get to a new resolution, that triggers a rendering, and then the tile queue is sort of restructured with newer, higher priority tiles, and those are loaded. So. Now you saw Tim zoom in and pan around on the map. This was the default behavior for users to interact with the map. Now the goal is to give the user control and modify the ways how you give users control. For this, we have two packages in OpenLayers 3. One is interactions. Interactions are usually about pointer gestures that allow you to interact with the map, like when you drag the map pants, when you pinch zoom on a mobile device, you can zoom, and so on. For things like the plus and minus buttons you saw on the top left of the screen, that's controls. Controls have buttons or any visual representation on top of the map. A scale line, for example, has also an overlay on top of the map. Let's look at this, how it looks in a real example. So here, we have another map using the Bing layer. The plus and minus buttons here are for zooming in and out. So let's zoom out a bit here to see more context. While I did that, you saw the scale line here, which changes also as you change the resolution and also as you change the center of the map. Worth noting maybe that the scale is calculated for the center of the map. And then we have a tool here also. Let's zoom in again to where I come from, somewhere in the Alps. And then I want to know where I am. This is a custom control, which provides the simple locate button here. Takes a while for the browser to figure out my location. And as soon as I'm there, I'll be pinned to my location. Then another nice thing about controls and about controlling the view is to use two way bindings. One thing we can do in OpenLens3 is rotate the map view. And as you can see here, as I rotate using one of the mouse interactions, also this slider here updates. There's a two way binding between this HTML5 slider control and the rotation property on the map view. And there's also this additional control that allows me to reset the rotation to north up. All right, so everything we've been looking at so far, our raster tiles, and we brought those in with tile data sources and displayed them in image based layers. Now we want to look at working with vector data. So again, there's a split. You have a vector layer. And then any number of sources that describe how you're going to go fetch that data. So in this case that we bundled together the format of the data and how it is retrieved. It's addressed by this URL. And then I say it's a GeoJSON source. So that gives me a GeoJSON format to parse it. And I want to read it into EPSG3857, so Web Mercator. That will display these GeoJSON countries on the map rendered in the browser. Obviously you want to style it. We have a default style that you get, but you might want to style it to your own liking so it fits your application. So on a vector layer, you can set the style. Here I'm giving it a single style instance with a fill and a stroke. I've only specified the color here for both. You can specify the width, the opacity is specified in the color of the fill. In addition, you can call that setStyle method with a function. 
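Before going further, here is a condensed sketch of the vector layer, GeoJSON source and styling calls just described, using the OpenLayers 3.0-era ol.source.GeoJSON (later releases replaced it with ol.source.Vector plus a format option). The URL and the resolution threshold are assumptions for illustration.

```js
// Vector layer reading GeoJSON countries into Web Mercator (OL 3.0-era API).
var vectorLayer = new ol.layer.Vector({
  source: new ol.source.GeoJSON({
    url: 'data/countries.geojson',       // placeholder URL
    projection: 'EPSG:3857'
  })
});

// A single style with a fill and a stroke...
vectorLayer.setStyle(new ol.style.Style({
  fill: new ol.style.Fill({ color: 'rgba(70, 130, 180, 0.4)' }),
  stroke: new ol.style.Stroke({ color: '#33517a', width: 1 })
}));

// ...or a style function, called per feature and per resolution.
vectorLayer.setStyle(function(feature, resolution) {
  var stroked = resolution < 5000;       // illustrative threshold
  return [new ol.style.Style({
    fill: new ol.style.Fill({ color: 'rgba(70, 130, 180, 0.4)' }),
    stroke: stroked ?
        new ol.style.Stroke({ color: '#33517a', width: 1 }) : undefined
  })];
});
```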
And that function will be called for every feature at every resolution that your map is rendered. It's not called during animated transitions between resolutions, but anytime you stop and you're not animating, this function is called with every feature. So you can decide how you want to style your features based on their attributes or the view resolution. So there's not enough room here for the code. But if the resolution was less than some value, I might construct a few symbolizers. Otherwise, I might construct different symbolizers. In this example, I've used that style function. And each time it's called, I get the area, the projected area of the geometries, and then the geodesic area of the geometries, and divide the geodesic area by the projected area. And then it's scaled. There's a linear scaling. Mike inspired me to use his HCL interpolate functionality between a brownish color to a steel blue color, based on essentially the distortion, how much bigger a country is, how much bigger the projected representation is relative to its geographic area. So as I zoom in, you can see there's also labeling going on here. So that's a scale or resolution conditionally labeling the features there. I've got a little stroke on the text as well. A nice thing is you rotate around. If you went to Eric's talk, you heard him talk about how we batch these rendering instructions. And this is all rendered in Canvas, where typically, if you just rotated the Canvas, you would get a rotation of labels. This is re-rendering with every rotation. As I zoom in as well, you can see that the font size stays consistent during that animated transition, as does the stroke width. So we're not scaling the Canvas. We're literally redrawing using the same batch of rendering instructions at every frame in this animation. So now we saw vector data rendered. But the real advantage of vector data is that you can let the user interact with the data, which means you don't just look at a dump map image, but you can really get instant information about what you're looking at without having to do another round trip to the server. Let me explain these two things that we're going to see in the next example. We're going to create a sort of a pop-up. And the pop-up is going to show you some information about the feature that we're looking at on the map. Pop-up can easily be implemented using any JavaScript framework of your liking. And to bind it to a geographic location on your map, we have a component in OpenLayer 3 called the OL overlay. You configure it with an HTML element that you have in your markup. And then you just add this overlay to the map. And as you can see here, we register a click handler on the map. And the map provides us a function that's called for each feature at Pixel. What this function does is it uses the hit detection that Eric has explained in his talk and gives me back all the features that are at the current pixel location that are passed here as a first argument. And then this function is called with all of these features. And in this case, since I'm looking at a polygon layer, I usually only expect one polygon at a location because these are countries and have boundaries and don't overlap. So I'm going to create a pop-up with the name of the feature in this case. And the most important thing here is to put the pop-up at the exact position that I clicked, I call the set position method here. In this feature interaction example that I'm going to show here, I do a bit more than just this. 
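A condensed sketch of the click handler and overlay wiring described here, reusing the map from the earlier sketch; the popup element id and the 'name' attribute are assumptions about the page markup and the data.

```js
// Popup anchored to a map coordinate; the #popup element is assumed markup.
var popup = new ol.Overlay({
  element: document.getElementById('popup')
});
map.addOverlay(popup);

map.on('click', function(evt) {
  var feature = map.forEachFeatureAtPixel(evt.pixel, function(feature) {
    return feature;                      // first hit is enough for polygons
  });
  if (feature) {
    popup.getElement().innerHTML = feature.get('name');
    popup.setPosition(evt.coordinate);   // stick to the clicked location
  }
});

// A pointermove handler can toggle the cursor the same way:
map.on('pointermove', function(evt) {
  var hit = map.forEachFeatureAtPixel(evt.pixel, function() { return true; });
  map.getViewport().style.cursor = hit ? 'pointer' : '';
});
```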
If you notice my mouse cursor here at the ocean where there are no countries, it's just a pointer. And as soon as I move here to a country, it changes its shape, notifying me that I can click here. So there's two handlers involved here, obviously one for pointer move and another for click. Now if I click here, I create this pop-up. As I said, it sticks to the location of the map. I can also rotate the map and will still maintain my pop-up here. And in this case, the information I display is also the calculated geodesic area of the country that I clicked on. All right, so we saw basic interaction with features. One of the other things that's worth mentioning in that last example that Andreas showed is we've sort of written a shim for the pointer event specification. If people know about this, it tries to generalize the different type of events that get fired on different devices. So you register for a pointer move event, for example. And that works on touch devices, works on devices with mouse, works if you have multi-touches, it works on different devices. So it's another nice feature of the library. OK, so the goal here is to allow editing. We've interacted with features, displayed some basic information. Now I want to modify the geometries associated with features. So that's handled in library with a combination of two different interactions. These could be wrapped together in an editing control. If you wanted to have buttons on your map or something, you develop a control for toggling the activity of these interactions. The select interaction, we'll see in just a second, maintains a collection of selected features. And then you can associate that collection of features with a modify interaction. So this says, I want to allow the user to click, click, click, and select a batch of features. And then I want all those selected features to be candidates for modification. So in this editing example, I'm going to zoom in here and show you the select behavior. So I just click a single time, select a single country. If I grab Sudan, Egypt, Syria, and Chad together, now all four of those features are in my selected feature collection. And now I can edit them together. So as I move around vertices, I'm getting the shared editing of vertices that are coincident between those features. Another nice thing, you see, as I approach this line here, there's a little hint that is given to me in the probably very small for you guys, vertex that shows up on that line. I can either add a new point, or as I get close to an existing point or an existing vertex, it snaps to that. So there's preference for modifying existing vertices. I can also add vertices. And again, this is modifying these shared boundaries between these countries. If, by contrast, I just selected one country and started editing it, I would have destroyed the topology of this region. And we wouldn't want to do that. OK, so that's editing with the select and modify interactions. So all the examples that we saw so far use the web Mercator projection that's commonly used on web maps. It's the one that Google Maps uses. It's the one that OSM tiles come in by default. And one thing that's really baked into OpenLayers 3 at the very core is support for projections. The one thing you may notice when you create an OpenLayers view is that you need to provide the coordinates in the map projection. And what seems like a thing that makes things more complicated is, in reality, something that really opens you to working with arbitrary projections. 
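Backing up to the editing setup for a moment, the select and modify interactions can be combined roughly like this; toggling them for an editing control is just a matter of adding and removing the interactions.

```js
// Click to select features (shift-click adds to the selection); the Modify
// interaction then edits whatever is currently in the selected collection.
var select = new ol.interaction.Select();
var modify = new ol.interaction.Modify({
  features: select.getFeatures()   // shared collection of selected features
});

map.addInteraction(select);
map.addInteraction(modify);

// A simple way to turn editing off again, e.g. from a custom control:
// map.removeInteraction(modify);
// map.removeInteraction(select);
```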
In this example that I'm going to show next, we're using the sphere Mollweide projection, which displays the world in a completely different way than the web Mercator projection. And in this case, I also pull in the Proj4.js library. Proj4.js is a library that provides transform functions between geographic coordinates and all projections that you can configure with this projection configuration syntax that's known from other frameworks that deal with reprojecting. To get such a projection in OpenLayers 3, you really only include the Proj4.js library. And then you use the ol.proj.get function to get an instance of that projection. Usually, you will also want to set the validity extent of that projection. You can find these extents on websites like epsg.io. And this allows OpenLayers to decide what zoom level zero means. Otherwise, you have to deal with resolutions, which is, believe me, more complicated and less natural than just providing this extent that you copy and paste from epsg.io. So let's look at this example here. Worth noting is the countries file that we load here is exactly the same that we used for the other examples. And it was even transformed on the fly for the other examples from geographic coordinates because these GeoJSON files and also KML files usually come in geographic coordinates. And you need to transform the geometries to fit the projection of your map. And in this case, we just transformed it to the sphere Mollweide projection. And you can see that this really looks completely different from a web Mercator map. You can also see in the bottom left the mouse position. Here again, I used geographic coordinates, latitude and longitude. Although I'm looking at a map that uses a completely different projection. So this all works through the transform functions that are provided by Proj4js. In this case, if you don't want to use Proj4js or if you have a transform that's not available through Proj4js, you can provide your own transform functions as well. Now, to take this a step further, what we just saw is although it looks like a globe, it's still just a map that you could also display on paper. And I'm going to give you a sneak preview now of something that's going to be released later this year. It's a library that connects the Cesium framework, which provides a 3D globe with many great ways to interact with 3D scenes, with OpenLayers 3. And this is a wrapper library where you configure your map in OpenLayers 3. And all you have to do to get the same map displayed in a scene on a 3D globe instead are these few lines of code. In this case, this is bound to a button that targets the 3D globe. So you can switch between a 2D map and a 3D map. Let's see how this looks. So this is just a 2D map. I'm going to zoom in a bit here to the area where we are. And now I switch to the globe. You won't notice much of a difference here until you start navigating with this globe. Like for example, here I tilt it, and then you see the atmosphere, and I can turn the map like flying on a satellite over the Earth. And then you can switch back to the 2D view, and your location is maintained. Yeah, in this map, you didn't see vector data yet, but that's also something that's going to be available in this Cesium wrapper library. So it will also be possible to display vector data. And we do have 3D coordinates in OpenLayers 3 already. So when you pull in a file that has Z coordinates, then you will also be able to display the third dimension in this Cesium globe viewer.
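Stepping back to the projection setup in this example: registering a sphere Mollweide projection with Proj4js and using it for the view looks roughly like this. The definition string and extent follow the pattern of epsg.io entries but should be double-checked rather than taken as authoritative, and the GeoJSON URL is a placeholder.

```js
// Register a sphere Mollweide projection with Proj4js, then use it for the view.
proj4.defs('ESRI:53009', '+proj=moll +lon_0=0 +x_0=0 +y_0=0 ' +
    '+a=6371000 +b=6371000 +units=m +no_defs');

var sphereMollweide = ol.proj.get('ESRI:53009');
// Validity extent (copied in the style of epsg.io; verify before relying on it).
sphereMollweide.setExtent(
    [-18019909.21, -9009954.61, 18019909.21, 9009954.61]);

var map = new ol.Map({
  target: 'map',
  layers: [new ol.layer.Vector({
    source: new ol.source.GeoJSON({
      url: 'data/countries.geojson',   // same file as before, reprojected on load
      projection: sphereMollweide
    })
  })],
  view: new ol.View({
    projection: sphereMollweide,
    center: [0, 0],
    zoom: 1
  })
});
```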
All right, so we've seen some of the flexibility of the layer working with raster data sources, vector data sources. Everything we've been looking at, with the exception of the cesium example, has been using the Canvas renderer in OpenLiAS. We have three different renders, a DOM renderer, Canvas renderer, and WebGL renderer. And those provide different functionality and different capabilities. The Canvas renderer renders everything to a single Canvas in the end. So you have one map Canvas. If people went to Eric's talk, they heard him say you can have hundreds of vector layers. All those vectors are drawn to one map Canvas. Each of the tile layers is rendered to an intermediate Canvas, which is then repainted onto the final Canvas, or redrawn onto the final Canvas. The DOM renderer lets the browser, the DOM, do composition. So each layer gets its own element. And we don't have a vector renderer, vector layer renderer for the DOM renderer yet. But I think we're just going to go forward with using Canvas and then have somebody potentially re-implement it with VML if there is interested in old IE support. But the DOM renderer would work with vector layers, and it would give you one different element for each layer. So using Canvas, the point I wanted to get to, was using Canvas is great because you can do things like work with the compose events that are fired on the layer or on the map. So after each layer is composed, so this would be a tile based layer, when it is done drawing tiles on its Canvas, it fires a compose event. If you're using the Canvas renderer, you can get access to that drawing context. So this event is fired, and my listener says, OK, I want to get the actual Canvas. With the context, I want to get the image data, and now I can do whatever I want with that image data. So what are some things that you can do? There's also a pre-compose event, which I'll show the use of in this example. One thing is to display two layers, and then with every mouse move, trigger a re-rendering. And what I'm doing in the pre-compose event is drawing a circular path and setting a clip on the Canvas. So the top layer is clipped to the circle around my mouse, and the bottom layer is drawn as normal. And so you get this little spyglass effect. You can come in here, and I can show you where my office is. There. Now you can all spy on me. So that is using these pre-compose and post-compose events. In addition, you can use these same events to do other things, like display a video on a map. So I just asked Dayan for permission. I could use this, and he said for sure. This is a Skybox video that I just basically copied. People saw the MapboxGL blog post using this video. I wanted to show how you could do the same thing in open layers. So in every post-compose event, I grab a frame from this video and display it on the Canvas. So I can pan around. It's a short video. It's looping here when it does that jerking. It's starting over. But you could see that you could use something like video tiles to display as a layer in open layers. And I intend to work this into an actual video source. So you could point to the URL for your video sources, and those could be pulled in as static videos, just like this one that's a single video. Or if you had a tiled video provider, those could be pulled in and tiled together on the map. OK. So there's a ton of functionality. It's a massive library. It's really like a mapping framework. And from the start, we chose to use the Closure Compiler. 
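Before moving on to the build story, the spyglass clipping just demonstrated boils down to something like this sketch with the precompose and postcompose events. The radius, the topLayer variable and the mouse wiring are illustrative, and high-DPI screens would additionally need the frame state's pixelRatio applied to the pixel coordinates.

```js
// Clip the top layer to a circle around the mouse using the Canvas renderer's
// precompose/postcompose events.
var radius = 75;
var mousePosition = null;

map.getViewport().addEventListener('mousemove', function(event) {
  mousePosition = map.getEventPixel(event);
  map.render();                          // trigger a re-render on every move
});

topLayer.on('precompose', function(event) {
  var ctx = event.context;
  ctx.save();
  if (mousePosition) {
    ctx.beginPath();
    ctx.arc(mousePosition[0], mousePosition[1], radius, 0, 2 * Math.PI);
    ctx.clip();                          // only draw the top layer inside the circle
  }
});

topLayer.on('postcompose', function(event) {
  event.context.restore();               // undo the clip for later drawing
});
```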
And we took on a lot of things with this commitment, basically. We decided to build a very large feature-filled library more than you'd ever use in a single application. And when you go to build a single application, it's then up to you to decide what parts of that you want. So if instead you're using a small library that, let's say, had a plug-in ecosystem around it, you might be pulling in those plug-ins by adding script tags to your markup, or you might be using some dependency management system to pull in those plug-ins. With open layers, when you want to pull in open layers, one way to do it is to specify this build configuration. So this configuration is describing what I want to be exported from the library. So OL map star, star is a wild card. So this gives me everything map related, all the map methods and properties. I want everything on the view that we've looked at. I'm pulling in a tile layer and an XYZ source, and then the vector layer and the geojason source. And then Andreas showed the overlay, and I might want that set position method. And then I'm going to compile this with advanced compilation, and there's a node-based build task that drives the compiler and produces this minified output for you. This is still way too high a bar, I know. And so what we need to do is to have a hosted build tool. One of the things we need to do is have a hosted build tool that will let people go online. One thing they could do is check, check, check, check, check, check what they do and don't use. Another thing we can do is provide a little monitor script that you can run with your application or while you're running tests, and it will actually detect what you're using, what are the methods you're using, and then it will provide you a link back to the build tool where you can get your build. Another thing we will do is provide custom profiles of the library. So if we decide that these are really common components, we will publish a build that suits a limited set of needs. But this gives you all the examples I showed you. They're bigger than leaflet, which we're a little disappointed by, but they're about 50K AG zip. So if you're looking at loading whatever 64 tiles that load in the first load, the library size starts to become less consequential. So. Yeah, you already heard. There were some references to future efforts. OpenL3 was released two weeks ago. So we invite all of you to look at our source code on GitHub. You might even want to become a contributor. In any case, you're free and invited to join our discussion on the mailing list. And with that, we'll just finish this and say thank you. Thanks for your interest, and thanks for coming here. There might be, did we run all the way out a couple minutes for questions, or if people want to do the transition? OK, three minutes. Clock's ticking. Both of us will also be around in the exhibition hall. Planet Labs has a booth there, as does Boundless. So you can come up and ask questions there as well. You stated you implemented the map projection. Does that scale through to the scale bar, the distance, the measure tool? So it uses the radius implied? OK. Yeah. So I think all developers make design decisions that they later on know they would have made differently if they had to do again. But a total rebuild sounds like a pretty drastic measure. What pushed you guys over the edge? Good question. OpenLayers 2 was really showing its age. It was built and released before IE7. And on the same day, I think, is JQuery 1 or something. 
And so it had a six or seven-year history with the same API, trying to still work, you know, maintain backwards compatibility and work on all those old browsers. And it really was just showing its age. And it still provides good functionality for people that use it, but we wanted to be able to take advantage of new technologies and push things forward.
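For reference, the build configuration walked through earlier looks roughly like the following JSON. The exact schema and export syntax vary between OpenLayers 3 releases, so treat this as an approximation of the idea rather than a copy-paste recipe; it would be passed to the node-based build task to drive the Closure Compiler and produce the minified bundle.

```json
{
  "exports": [
    "ol.Map.*",
    "ol.View.*",
    "ol.layer.Tile",
    "ol.source.XYZ",
    "ol.layer.Vector",
    "ol.source.GeoJSON",
    "ol.Overlay#setPosition"
  ],
  "compile": {
    "compilation_level": "ADVANCED"
  }
}
```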
|
OpenLayers 3 is here! Now it's time to dive in and get mapping. Join us for an overview of OL3 from a user's perspective. We'll cover common use cases and cool features of the library you might not have heard about. Our goal in this presentation is to get you comfortable with the OpenLayers 3 style of mapping - providing an introduction to raster and vector basics, discussing tips for integration with other JavaScript libraries, and exposing you to the build tools so you can choose just the functionality you need for your mapping application.
|
10.5446/31638 (DOI)
|
Thanks everyone for coming. So I am Dane Springmeyer and I work for Mapbox and I'm gonna be talking today about vector tiles for fast custom maps. And so Mapbox is based in Washington, DC and now we have San Francisco offices. We're about just over 50 staff at this point. So where are we headed today? Well basically, I want to dive deep into what vector tiles are. My hope is that in a few years vector tiles will be so adopted that we won't really need to talk about the details anymore. They'll just work, but since there's still a new idea or concept to a lot of folks, I want to dive deep into how they work. And before I start, I'm going to be talking about what I'm familiar with in terms of the tools and the goals of vector tiles for me. But you know, if you've ever worked with vector data in the browser, you've really worked with vector tiles. I mean dating back to the early WFS implementations, it's really vector tiles. So there's a lot of precedent here and people have been doing this for a long time. But what I'm going to talk about is a Mapbox effort to work on a spec and a set of libraries and tools that are very high performance. Okay, so what is a vector tile? Well simply speaking, it is like an image tile, which we're all familiar with. It's easy to cache and serve rapidly. So it's divided data. It's the same addressing scheme as image tiles as well. The zxy.png means the same thing as a zxy.vector pbf, which is the format that we're working on. So you can address it at zero, zero, zero, or at higher zoom level. It's the same way to request a tile. And vector tiles can represent many complex data layers, just like an image tile can. You can bake a lot of information into it. And where we're going with this is we want a full stack to work with vector tiles that's completely open source. So there's no proprietary components. There's no piece that you can't switch out for another piece if it works better for you. So they should be just as common and easy and open source to work with as an image tile. But of course vector tiles should be better. They can contain all sorts of raw vector data that you can do interesting things with. Geometries, road names, area types, building heights. They should contain highly packed data that's efficient for rendering, but they should contain attributes that you can do interesting things with. And yeah, they're very compact. So the format that we're working on easily fits on a USB thumb drive. So it's 20 to 30 gigs, and that's all of the OpenStreetMap planet baked into vector tiles. They should be very fast to parse, both client side in a language like JavaScript or server side if you're doing work with them. And so our client side languages choice is JavaScript, of course, and we work in C++ mostly on the server side. So they should be fast to parse. And a key to being fast to parse is you should be able to parse them lazily or incrementally. You shouldn't have to read the whole thing just to get, say, one layout or one feature or one geometry. You should be able to see just right into where you need and get what you want. You should only pay for what you get, which is a slight difference from, you know, textual formats like GeoJSON or TopoJSON, which work fabulously as vector tile formats. But we've chosen a binary protobuf-based format largely so that we can do lazy or incremental parsing, because that scales better in our experience. 
So overall, vector tiles, I feel like, offer a very bright future for customized, radically customized and efficient rendering of data as large as OpenStreetMap or larger, including raster data. So then, how do vector tiles actually work? Well, you can put any kind of vector or raster data in. These are the formats that the Mapbox tools are prioritizing support for. But it gets really interesting, right? You can put any types of data in. You don't have a fixed schema. So I like to think of vector tiles as ravioli, right? Because you can put anything inside, but they're really Chef's choice ravioli. The cartographer that designs them gets to choose what ships with the product. So you can choose whatever flavor you want, right? So I imagine a near future where many people are publishing vector tiles and they contain their own specific ingredients, right? So if you want to publish blueberry vector tiles, you can. If you want to publish beets, you can. If you want to throw eggs in your vector tiles, that's up to you, right? All that's common is the structure of the data, right? So the way it works is we have layers, one or many. Each layer has a feature. Features can have IDs. Each feature has an set of attributes, key value pairs, and geometries. Pretty simple. Layers are ordered and named is how you reference them. So you might have a vector tile with parks, roads, and places, and you decide on the names. Features are dictionary coded. So what that basically means is we don't store a key more than once in the whole tile. So if you have an attribute named name, we turn it into integer, integer, and then store that once in the tile for efficiency. And then geometries are single flat array. We found that if we compress, say, multi geometries or multi points into a single array, it stores much more efficiently. So that makes it a little tricky then to unpack back into, say, nested arrays of GeoJSON for multi geometries, for example. But we feel like it's worth it. That flat array stores very, very well. Okay, so yeah, how do geometries really work? We support all geometry types, points, lines, and polygons, multi or single. We store coordinates as integers because that's more efficient. And then those integers are delta encoded and zigzag encoded. So what does that mean? Well, delta encoding basically is storing differences rather than the actual value. So a simplified case would be if we have a line string of coordinates that looks like this, we would store the original coordinate for the first x, y, and then from then on out, we would store the differences from that first coordinate. So this is a dream case, right, where all of the rest of your coordinates would be ones, which obviously compresses really, really well if you have repeated values. But in the real world, it still compresses better delta encoding because the numbers are smaller. It takes up less space. And then zigzag encoding on top of that also helps save space. So the idea behind zigzag coding is we do want to have negative values in our vector tiles because you might have data that's buffered outside of the vector. This is important often for labeling. Or at render time, if you want to blur geometries, you need to have a little bit of a buffer. Okay, so we need negative coordinates in the vector tiles. But we don't want to pay the price for that. And the trick is zigzag encoding, which is basically a method of turning sign integers all into unsigned integers. So you only store positive number. 
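As a small illustration of the delta and zigzag encoding described above (this mirrors the general technique used by protobuf-style encodings; it is not a complete vector tile encoder):

```js
// Zigzag maps signed integers to unsigned ones: 0,-1,1,-2,2 -> 0,1,2,3,4.
function zigzagEncode(n) { return (n << 1) ^ (n >> 31); }
function zigzagDecode(n) { return (n >> 1) ^ -(n & 1); }

// Delta-encode a line string: keep the first coordinate, then store differences.
function encodeCoords(coords) {
  var out = [], prevX = 0, prevY = 0;
  coords.forEach(function(c) {
    out.push(zigzagEncode(c[0] - prevX), zigzagEncode(c[1] - prevY));
    prevX = c[0];
    prevY = c[1];
  });
  return out;
}

// [[10, 10], [11, 11], [12, 12]] -> [20, 20, 2, 2, 2, 2]
// Small repeated deltas like this pack far better than the raw coordinates.
console.log(encodeCoords([[10, 10], [11, 11], [12, 12]]));
```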
And then when you decode it, it's easy to go back and get the original negative number. Okay, so what can you do with vector tiles? You can over zoom them. This is the whole idea, right? You should be able to ship a tile set and not burn it all the way down to say Zoom 17 or Zoom 20. But you should be able to ship a tile set to maybe only Zoom 14 and still allow visible display of those tiles to much deeper zoom levels. So this works around the problem of trying to pre-cache tiles for the whole world. You should be able to do that. You should be able to render Zoom 14 for a fairly big data set very quickly and later on view it at deeper zoom levels. You should be able to also composite them, right? Because once you've burned vector tiles, for some layers, you should be able to combine it with other ones. So that's easy to do with this format. Now, you might be wondering, well, how do you combine vector tiles at different zoom levels? Well, it's very easy, obviously, at the same zoom level. With protocol buffers, the format we're using, you can just concatenate them. So it's really quick. But at different zoom levels, you're going to need to clip some out and re-render them before doing some. So you should be able to do something like this. Say you have three different vector tile sets from three different organizations or three different cartographers, parks, roads, and points of interest. They make sense to pre-bake it to a different max zoom level, right? Points of interest, you might want to have highly resolved, whereas parks that are just square polygons don't need to be as resolved. So the idea is you should have different tile sets that exist at different max zooms, but you should still be able to render them all together and benefit from the highest resolution of any of those tile sets. Okay, so there's also this concept of sources that we're using to describe when you have a bunch of vector tiles together that represent one layer. So that could be a bunch of places in an MB tiles format, or it could be URL to vector tiles that you can address by ZXY, or a tile JSON file that describes somewhere else to look online. Okay, so for more technical details, this is where you can head. GitHub, 4.0 Matbox, 4.0 vector tile spec, this is where the emerging specification is, and it's definitely still at the emerging stage. We're looking for feedback and use cases. What we've done is come up with a solution and format that works for us, publicizing it, and then hoping for feedback and to learn how many other use cases it works for. We're not trying to create a spec for all use cases. We're collecting on the Wiki at that site various implementations in different languages. So there's quite a number of invitations in JavaScript, some pure JavaScript, some Node.js, some in C++, some emerging implementations in Python, and I'm sure more coming in the future. So yeah, check out the spec. We're still at 1.0. We haven't incremented pass end recently. Because there's been lots of work to optimize the tools around vector tiles. The format and the structure hasn't been a bottleneck yet, where we're still working on the tools around them. So you might be asking, okay, does it sound interesting? How can I create them? Okay, so there's really two key workflows. There's a variety of command line tools that are available. We maintain one of them called TileLive, which is a Node.js toolkit that can be used on the command line, or you can use to build larger applications. 
There's Tessera, which is a module written by Seth Fitzsimmons from Stamen Design, which he is talking about right now in another room. Unfortunately, we actually thought maybe we should try to get everyone in the same room. We could just have a coup, but when I saw the size, I knew that wasn't feasible. Okay, there's also a library called Avocado from MapQuest, which was released a few weeks ago, which is C++ based. And it's for creation of vector tiles, I think only. There's obviously a variety of tools now to also render them and work with them, but I'm focusing on the creation stage. And then there's a product called Mapbox Studio, which is what I've been working on for almost two years now. It's been in development and actually was launched officially today on the Mapbox blog, so I encourage you to check out Mapbox.com's blog. It is formerly known as TileMill 2, so the reason we renamed it is it's just purely based off of vector tiles, which introduces new concepts and new performance abilities. So we felt like it would be reasonable to think of the tool as something separate from TileMill. We do plan to do one final release of TileMill in the future for those of you that are depending on it and need some time until you can upgrade to Mapbox Studio. So the tool will be supported in parallel for a period of time. But yeah, Mapbox Studio has been in development for a while, officially released today. So it has a visual UI to creating vector tiles. It also has a styling interface to make maps from vector tiles, which you'll probably hear about and the National Park Service gave a talk about earlier that some of you may have seen. But what I want to focus on is the slightly more hidden part of Mapbox Studio, which is how you create them and it allows you to create them locally with local data and create and export locally without any Mapbox.com services. So what does that look like? It's a pretty simple interface because we strove to make it without as many options as possible. So this would be an example of the interface. The idea is you can load in multiple layers and this would be a water polygon from OpenStreetMap. You can set the projection and the minimum zoom and the maximum zoom that you're going to render it out to. And then the buffer size. Like I mentioned before, for cases like labeling and blurring effects, buffers are very critical. You can rename some of the metadata in terms of the attributes that are included in this in vector tiles because it's imagined that you would really curate the data and then prepare it for a different audience of cartographers that are going to render it. The vision here is those that know the most about a data set and its metadata would be the producers of the vector tiles. They would really prepare it for others. Okay, then you can export it from Studio. And just as an example, this data set is 530 megabytes as it starts as a shape file. I did a Z to 0 to 10 export which for the resolution of the data is more than enough to zoom levels. So at zoom 10, you can continue zooming to 12, 13, 14. And the only point where it gets a little bit blocky is where the original data gets blocky. If I only did an export at zoom 0 to 2, then you would not be able to zoom that deep. You would start to see the quant effects of the vector tile encoding, the optimizations. But zoom 10 for this data set was more than enough to capture the original resolution of the data. 
It finished in eight minutes on my MacBook and it's about 1 million tiles on the resulting vector tile set inside of an Eskilite database is much smaller than the original file. So that's really because of the integer encoding, delta encoding, and zigzag encoding. In this case, I really feel like we're capturing the original resolution of the geometry. So what can you do with that? Well, you can obviously upload it to mapbox.com just like you could in MB tiles of image tiles, like you can in the past. And then you can see it online and then you can bring it back into tile mal2, or excuse me, Mapbox Studio to then style. And this would be an example of the styling interface. It's very, very familiar for those of you that have used tile mal in the past. It should be familiar. It should be exactly the same except what's happening under the hood is your data is tiles, so it's extremely fast to access. But you can also do a headless export. I mentioned tile live is the tool that we maintain that Mapbox Studio is built upon, but you can download it. It's free, 100% open source, and you can use it headless without installing Mapbox Studio GUI on the command line. And it's a Node.js application, and its dependencies are the Node.js module for Mapnic. So if you were to run npm install Mapnic, MB tiles, tile live, and tile live bridge, you would have all of the dependencies in a matter of minutes to run this script. And then the way that the script works is you run the tile live copy command, you pass it on Mapnic XML, which is that data.xml, a bounding box, and then it dumps the tiles to an MB tiles file. The one trick or gotcha that you'll see if you try this is that data.xml is created by Mapbox Studio, and it has a little bit of metadata in it so that we know basically what attributes to put in the vector tiles and a few other things. So it's not a raw Mapnic spreadsheet. It's a little bit of extra metadata that Mapbox Studio has added, but it's not too hard to author that yourself. So you could get set up for this export outside of Mapbox Studio. And yeah, then you've got an export and MB tiles. So the whole idea here is that this workflow should scale to any large data sets that are complex and have a lot of potential layers you could represent, but the potential attributes. So this is an example of the x-ray style applied to the vector tiles of OpenStreetMap. And the x-ray style is basically a default style that you get when you upload an MB tiles to Mapbox.com, and also a default style that is embedded in a variety of the client tools that can read MB tiles. Basically just renders in bright colors the various geometries and labels. So you can kind of get a sense of what's in there. So that was New York, and this is Washington, D.C. And then the other thing that is also on GitHub and OpenSource, and there's a variety that were added this week as part of the launch, is sample styles. So there's an OSM bright.tileMinal2 that you can use to get started styling things very quickly. And then I just want to show you Mapbox Studio real quick, and I'll show you some of the starter styles. So if I were to go to projects here, styles, new style, right now in the latest release as of last night, I have four starter styles. Oh, there we go. Thank you, Andrew. So I have four starter styles. I realize it's a little cutoff. A basic OSM bright is actually built-in Mapbox Outdoors and then satellite, what is that called? Satellite afternoon. Yeah, so let's see if I can figure out how to resize this. 
Here we go. I wanted to show you satellite afternoon. It is one of the most complex because it combines three vector tile sets, a vector tile set of raster imagery, satellite imagery, a vector tile set of hill shades and topolines, and then also the OSM data curated in a vector tile set that we call Mapbox Streets. So I'll just demo that real quickly. I think we should zoom in to see if I can go full screen here. Zoom in to Mount Hood to give you a sense of how cool this is. And if I get back to full screen, so the whole idea here is not prepackaged maps, but that you can change absolutely everything about this style, including the compositing operations about how the different layers interact. And just on a technical note, you might be wondering, okay, so he just said that there's all these vector tile sets that are actually being combined. So what's happening is Mapbox.com is fielding your request for what vector tile sets you want. MapNIC on the service side is concatenating the protocol buffers and then shipping back to the client, in this case my desktop app, one message in a protobuf format. And we can iterate through and see what layers are in there and then match up to the styles here. So basically what that means is if I misspell one of these styles, it gracefully degrades. Basically there's just not a match between the style name and the layer name. So let's see. One of the big tricks here in the style is this image filters. We're doing HSL transformations, so I can comment this out and get a look at what the raw satellite imagery looks like. Or I could go in and see that the hill shade is being applied with an overlay compositing operation, comment that out and see what effect it's having. Great. So the idea is these are starter styles for anything you can imagine doing. Okay, so I'm just about up with time, but I'm going to go back to my presentation and show you a few more slides. So the vision here is that vector tiles are going to make possible a new generation of renders that don't have to worry as much about how big the data is as being shipped to them. And they can go crazy with styling effects. And that's kind of the whole idea, is to unleash a new generation of carelessness in terms of data size and fierceness about how beautiful and interesting interactive maps can be made. So we're working on, sorry that that's cut off, I think I screwed up though. Well, it's only a few slides. This is a Mapbox GL that is cut off. I may not be able to fix this in time. Anyway, so we're working on an application called Mapbox GL, which is 100% written in JavaScript and WebGL, which can render vector tiles. And if you haven't heard about it, go ahead and go to Mapbox GL on Mapbox.com and see what it's about. And also vector tiles, you should think of them as about more than just rendering. This is a toolkit called Vector Tile Query, also in GitHub, that basically allows you to, in a very quick API call, say, here's a source of vector tiles and here's a line that I'd like to query them along. And here's the attributes I'd like to pull out. So what you can do with that is things like this. If you have a vector tile set of elevation and a vector tile set of temperature, you could do a line profile and get back the results. So I'll just demo that real quick before I break. So the cool thing about this is that, I should probably put that on the screen for you guys, because you can query now at any zoom level or any, and we automatically be a back data that's at the relevant resolution. 
So you can do a query across the whole U.S. Or, of course, you could zoom in to the Portland area and say, what is the temperature and elevation range from Portland to Mount Hood? Whoops. Yeah, the display obviously is a little bit messed up with the size of the font, but you get the idea. So I definitely encourage you to check that out. This is very new and experimental, but has a lot of promise. Okay, real quick, future work. There's a severe lack of documentation for tinkers. I'm going to be working on that in the future. So if you're a developer that's a little bit confused, that's an opportunity for me to answer your questions and encourage you in the right directions to tinker, hack, and pull apart how this actually works so you can build back up applications on top of it. So that's a future that I want to work on, as well as more new use cases beyond rendering and querying. We have some work to do around geometry robustness. The polyons are not, we haven't worried about them being OGC simple spec, so we have some work to do there, tools to clean up and validate. And then, of course, there's big tool sets out there that would be interesting to explore, maybe support. There's a rumor I heard today that maybe the post-GS developers are considering support for vector tiles. Maybe in this format, maybe in another, I think either would be fantastic to entertain and maybe tool kits like OpenLayers. I know Leaflet is considering support as well. So thank you very much.
|
Vector tiles are becoming a common solution for fast clientside rendering of spatial data in both browsers and mobile devices. With the recent release of TileMill 2 Mapbox has made it easier to design and render vector tiles. This talk will cover the open source technology under the hood in TileMill 2 as well as other available tools. Also discussed will be the status of an emerging specification for vector tiles and recent advances in the format.
|
10.5446/31639 (DOI)
|
Oh now, more on vector tiles. Not as advanced as the Mapbox ones. Actually, we aimed to produce, to develop something which was agnostic in terms of client, performant, and worked well with our current infrastructure. By the way, I'm working at the University of Melbourne on the AURIN project, and me and my colleague Andrew Bromage developed this server called Tilez. If you go to GitHub, AURIN, Tilez is there, with a Z at the end. It's been open-sourced last week. So you can go and do whatever you like with it because the license is Apache 2. This is based on Node.js and uses both CouchDB and PostgreSQL. So, first of all, let's start with the issue we're facing. We faced. The AURIN project wanted to give our users a very rich user experience, a very good one. Which means that we wanted to have vector data, especially polygons, on the client. So, which meant having a large amount of data. And we, of course, we use the raster data only as a background, but, say, we wanted some high degree of interactivity. So, vector data were the only way to go. We wanted to have highlights, tooltips, that kind of stuff, or choropleths on the fly. We actually allow our users to upload their data and join them with the geography data we have. So, polygons at the level of, say, state, county level, that kind of stuff, right? Statistical divisions, whatever, basically the data are joined. So the user uploads some data, those data are joined, and then on the client, the user can choose the visualization to apply. So, we were aiming at this level of interactivity. This is the level of freedom and the richness of user experience. So, of course, there were a few pitfalls along the way. So, basically, okay, of course, all our data that I will be showing are related to Australia. But basically, one of our requirements was to show a choropleth map like this, visualizing for the whole of Australia, like 2,000-plus polygons. And, as I said, vector features. So, basically, the user should have been able to change the color of the mapping, so change the intervals, change the number of classes, change the colors, highlight, click on something, add something back, that kind of stuff, right? And of course, we couldn't just transfer all this information to the browser every time. There was another requirement. For instance, we had a layer which was composed of more than 60,000 polygons, basically 90 megabytes of data. And the idea was to show all of them at state level, because at country level it wouldn't be possible, but still to let the user interact with those kinds of data. Now, we all know what raster tiles are. So, of course, they are useful, mainly because data can be cached in the browser and in the cache server. This is one big advantage of raster tiles. The other big advantage is that they can be pre-computed. So, this is for raster tiles. For vector tiles, it is, sorry, there is something missing here. Oh, it was supposed to be something, an image there. Damn it. This is upsetting. I tried to convert the PowerPoint and it clearly didn't work. I didn't check all of the images. All right, all right, all right, all right. Okay. So, basically, what's the issue with vector tiles? It's especially with polygons. I mean, with line data, like street data, it's not really an issue to cut features into tiles.
With polygon data, it's more of an issue because with those polygons, you don't want the user to see all those split polygons. So, you need a way to recombine them somehow so that the user sees a whole polygon, not the individual tiles. So, what we aimed at was something on the client to reconstruct those polygons. Basically, the idea is that we have a lot of processing power on the client, so we can do a two-step thing. First, you load the vector tiles and then from the vector tiles, recompute the original polygons on the fly in the client, because we have powerful clients. Another issue that we were facing is that there's redundancy in the data. Oh, by the way, we were talking about JSON data. So, we started with GeoJSON. The problem with GeoJSON is, of course, it's not a topological format. So, shared arcs between polygons are repeated, which means that data are, say, between 50% and 70% larger, depending on your data, of course. So, we looked into TopoJSON as well. Damn it, really, without... Okay, so, okay, this is an example of split polygons, and of course, we don't want to have this effect. So, as you may see, Australia is a continent, and unfortunately, there's a lot of desert areas with a lot of... from a geographic point of view, it has a lot of small islands, for instance, which need to be visualized, but of course, are heavy in terms of data size. So, simplify... Another step that we had to take is to simplify geometries. Now, simplifying geometries, we use the Douglas-Peucker algorithm, and we did this behind the scenes, of course, beforehand, using PostGIS. So, this is one of our examples. So, you may see that at the lower zoom level, that polygon, that peninsula over there, has been simplified, so it has fewer points. And then over to TopoJSON. So, with TopoJSON, you don't have to repeat arcs which are already there, which are shared by two polygons. So, this was another advantage of pre-processing. There are a few standards to serve tiles, and a few protocols, if you like. We chose the TMS one, which is the one that is basically the de facto standard now. So, a little bit more about the architecture. We use both CouchDB and PostGIS. Why? PostGIS is used to generate the data the first time, and then those tiles, the data, are put into CouchDB, which is used as a caching mechanism, as a cache server. Now, of course, there could be another level of cache server, but we didn't implement it, so Squid or whatever, we didn't use that, though. And, of course, there is a cache on the browser. So, basically, whenever there is a vector tile request, first the browser looks into its own internal cache. If it's not there, it asks Tilez, which is the server, for that particular vector tile. And Tilez looks for it in CouchDB. If it is there, it serves it directly to the browser. If it's not there, it calls PostGIS to generate that tile. Now, a little bit about configuring Tilez. We needed a fair degree of flexibility because we wanted to serve both GeoJSON and TopoJSON. We wanted to have different layers because data are generalized. So, we store every, not every zoom level, but every set of zoom levels in a separate table. So, you could have an ungeneralized version of data in one table, and that's associated with the highest zoom levels, say, between 15 and 20. You can have a more generalized, but not terribly generalized, dataset stored in a table to serve zoom levels between, say, 13 and 15, and so on.
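Going back to the caching flow for a moment, in Node.js terms it amounts to something like the following sketch; serveTile, getCachedTile, cacheTile and generateTileFromPostgis are hypothetical names standing in for an Express-style handler and the CouchDB and PostGIS calls, not Tilez's actual code.

```js
// Illustrative sketch of the cache-then-generate flow described above.
// The helper functions are hypothetical stand-ins for CouchDB/PostGIS calls.
function serveTile(layer, z, x, y, res) {
  getCachedTile(layer, z, x, y, function(err, cached) {
    if (!err && cached) {
      return res.json(cached);                    // CouchDB hit
    }
    generateTileFromPostgis(layer, z, x, y, function(err, tile) {
      if (err) { return res.status(500).end(); }
      cacheTile(layer, z, x, y, tile);            // store for next time
      res.json(tile);                             // and serve it
    });
  });
}
```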
So we needed something flexible in order for Tilez to retrieve data from different tables. So we chose this approach, and of course we can have many different layers. Basically, there is a JSON configuration to specify how to produce those tiles, and a property file. The property file is used just to give system information, like where the database is, where the CouchDB is, username, password, that kind of stuff. This is a fragment from the configuration file. There are, of course, layers. One layer can be split into a range of different zoom levels, and basically every range of zoom levels is independent. So you can specify a different PostGIS table; you can add an additional expression if you like, so you can have the same table behaving in a different manner because, say, some polygons or arcs or whatever are selected at one zoom level and not at another. You can of course add additional columns, other than the geometry and the ID — the geometry and the ID must be there, because you use the ID of the polygons to reconstruct the polygons on the client. You can also specify the projection and the location of the CouchDB database, that kind of stuff. Now — again, this was supposed to show an image, but it doesn't. Sorry about that. Tilez has an API: in addition to implementing the standard TMS protocol, which is of course an HTTP GET verb, it uses DELETE and POST. The POST is used to populate — to pre-seed — a layer. You basically specify the name of the layer, the format, which can be GeoJSON or TopoJSON, and then a minimum and maximum zoom level. So via HTTP you can just populate the cache this way; it's pre-populated for you. With DELETE, of course, you can delete a layer. And if you just want the list of layers, the number of tiles, the size of each layer, and that kind of stuff, you just issue a GET on the layers resource without specifying a layer — pretty intuitive, I would say. Another word about generating TopoJSON. We wanted to generate both GeoJSON and TopoJSON. Now, TopoJSON output is implemented in PostGIS, and GeoJSON as well. But while generating GeoJSON in PostGIS is simple, TopoJSON is not so much, because you actually need to create a temporary table to store the data even to generate one single TopoJSON tile. So it's not terribly easy. Moreover, it's terribly slow. It is slow for a reason: it's very thorough, very, very thorough, and of course PostGIS is meant to deal with massive amounts of data. But this, of course, created a problem for us. So what we did was use the TopoJSON JavaScript library, and that was very fast. Generating TopoJSON is basically a two-step process: we take GeoJSON out of PostGIS, then we transform it into TopoJSON, then we store it in CouchDB. It was fast, but it doesn't work with massive amounts of data. It's okay if your data are not that massive, because it keeps everything in memory — that's the reason why it's fast. Another thing is that it's not as thorough as the PostGIS topology, which means it cannot catch all the topological issues and errors. So my colleague Andrew Bromage actually spent a good deal of time working out all the topological errors in PostGIS first, and then we used the TopoJSON library to generate the data. Sorry for the missing images.
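A sketch of that two-step generation path in Node: GeoJSON from PostGIS is converted with the topojson package (the v1-era topojson.topology() call is shown; newer releases moved this into topojson-server) and the result is cached in CouchDB through the nano client. The database name, document-ID scheme, and quantization value are illustrative, not Tilez's actual choices.

```javascript
var topojson = require('topojson');
var nano = require('nano')('http://localhost:5984'); // placeholder CouchDB location
var tilesDb = nano.use('tiles');                      // hypothetical cache database

function cacheTile(layerName, z, x, y, geojson, callback) {
  // Step 1: GeoJSON -> TopoJSON in memory; quantization trades precision for size.
  var topology = topojson.topology({ layer: geojson }, { quantization: 1e4 });

  // Step 2: store the tile in CouchDB under a predictable document ID.
  var docId = [layerName, z, x, y].join('/');
  tilesDb.insert({ tile: topology }, docId, function (err, body) {
    callback(err, body);
  });
}
```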
Okay, so for the clients, what we did was develop a Leaflet client, pretty straightforward, for both GeoJSON and TopoJSON. GeoJSON is very simple. For TopoJSON, what we had to add is the reassembling — the merging — of the polygons in the client. We did it as well with OpenLayers 2, which was tricky and is still under development, to be honest. The notable thing is that if you want to merge the single tiles back into the original polygons on the client, there are some tricky bits, because tiles are loaded asynchronously, of course, and you cannot be sure that everything has been loaded. So basically we use some timing: there is an initial delay to start the merging process, and every time a new tile is loaded there is a further delay before starting a new merging process, once hopefully all the tiles have been collected. So far it is smooth, so I don't see any trouble with this approach, but to be honest I haven't tried it with a slow connection. We tried with a simulated 512 kilobyte connection, and it was rather smooth. For lower connection speeds I'm not quite sure. It could be a little bit sketchy, because there might be holes where tiles are not loaded yet, and then once they load the merge process is re-triggered again. So it may not be as smooth as I would like, but I haven't tried that. Okay, with Leaflet it's simple enough — that's how to add the TopoJSON layer. Yeah, this is what I explained right now. These are a few references, and that's about it, I think. Questions?
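A minimal sketch of that delayed re-merge trigger described above, assuming a Leaflet-style tile layer that fires a tileload event and a hypothetical mergeTiles() helper that reassembles polygons split across tiles (for example by grouping tile features on their ID and unioning the parts).

```javascript
var mergeTimer = null;
var MERGE_DELAY_MS = 250; // illustrative delay

function scheduleMerge() {
  // Each newly loaded tile pushes the merge back a little, so the expensive
  // reassembly usually runs once after a burst of tile loads.
  if (mergeTimer) { clearTimeout(mergeTimer); }
  mergeTimer = setTimeout(function () {
    mergeTimer = null;
    mergeTiles(); // hypothetical: rebuild whole polygons from the tiled pieces
  }, MERGE_DELAY_MS);
}

tileLayer.on('tileload', scheduleMerge); // assumes a layer that emits 'tileload'
```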
|
This talk will introduce the Tilez project, which provides a Node.js-based realisation of a Tile Map Service serving tiles in both GeoJSON and TopoJSON formats. These formats provide a seamless and highly performant user mapping experience in both OpenLayers and Leaflet. The key to fast display of vector geometries in Tilez lies in the use of tiles, which leverage both local and server-side caching. Whilst linear features lend themselves easily to tiling, polygons have traditionally represented more of a challenge. Tilez provides further efficiencies by using TopoJSON as a transport format between the server and the client. Tilez implements all these improvements to support web-based vector tiling, delivering good performance under heavy load through Node.js and CouchDB-based caching, and efficient transport through TopoJSON. This talk will cover Tilez and the practical aspects of its implementation together with use cases from the Australian Urban Research Infrastructure Network (AURIN - www.aurin.org.au).
|
10.5446/31640 (DOI)
|
Thank you for attending this talk. My name is Eric Lemoine. I work for Camptocamp, which is a company located in Switzerland and France. I'm from France, so from the French office of Camptocamp. Today I'm going to talk about the OpenLayers 3 library, which is the new version of OpenLayers, and I want to explain what makes it unique. That is going to be the subject of this talk. First of all, I would like to mention that we have released the final version. OpenLayers 3 is now released; it was two weeks ago, I think. So it's a major thing for the library and the project in general. We've been working on OpenLayers 3 for two years. It was funded by a collaborative group of companies, so it's really great to see it out now. In this talk, I'm going to present some of the techniques we use internally in the library and what makes this library unique. I'm not going to talk about the API or the way you use OpenLayers. If you want to know more about the API and how to use it in general, you can go to Tim and Andreas's talk, which is at three o'clock in this room after the break, I think. So I'm going to start with an example, a demo. This is a typical OpenLayers map. We use three different rendering technologies in the library: the DOM, Canvas, and WebGL. This map here is rendered with Canvas, which is currently our main rendering technology. In this map there are two layers. There is a background layer, which is a tile layer — it's actually a Bing Maps layer — and on top of it there is a vector layer. What's in the vector layer: you can see these labels, white and black, with the names of the countries; there is a green marker on Jamaica, so this is also rendered as a vector in the vector layer; and you can also see the boundaries of the countries, the blue lines, and those are actually polygons. And what I want to show you — one feature of OpenLayers is that you can rotate the map. So I can rotate the map left and right here. As you can see, what's interesting to note is that the labels and the markers stay horizontal; they don't rotate with the map. This is an optional behavior, but that's something you can do with OpenLayers. Another feature I want to show is that I can animate the map. We have an animation framework, so you can animate your map in many different ways. Here, what I'm doing is a fly-to animation from Jamaica to Guatemala. And again, what's interesting to note is that as I animate the map, there is no stretching of the lines. The labels stay at the same size during the animation, and the same for the marker. So the question now is: how do we do that? How do we avoid stretching the lines and the labels during the zoom animation? What we do is we draw very often. Vector layers are drawn very, very, very often. This is to be able to get good rendering quality, good image quality. Basically, we render the vectors at each frame of the animation, and during animation and interaction. Each frame means at each step of the animation we redraw the entire scene. And to be able to get smooth animations, it is known that you need to redraw at between 30 and 60 frames per second. So this means redrawing vectors at this rate to be able to get good rendering quality and good performance.
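A minimal sketch of the rotation and fly-to behaviour just described, written against the OpenLayers 3.0-era API (ol.animation.* with map.beforeRender; later releases replaced this with view.animate()). The coordinates, durations, and target div are illustrative.

```javascript
var view = new ol.View({
  center: ol.proj.transform([-77.3, 18.1], 'EPSG:4326', 'EPSG:3857'), // roughly Jamaica
  zoom: 7
});
var map = new ol.Map({
  target: 'map',
  layers: [new ol.layer.Tile({ source: new ol.source.OSM() })],
  view: view
});

// Rotate the map; point symbols and labels can be configured to stay screen-aligned.
view.setRotation(Math.PI / 6);

// Fly to Guatemala: pan and zoom pre-render functions run on every frame of the animation.
var guatemala = ol.proj.transform([-90.2, 15.8], 'EPSG:4326', 'EPSG:3857');
map.beforeRender(
  ol.animation.pan({ source: view.getCenter(), duration: 2000 }),
  ol.animation.zoom({ resolution: view.getResolution(), duration: 2000 })
);
view.setCenter(guatemala);
view.setZoom(6);
```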
So we really have a performance challenge here, because we want to redraw very often, at 60 frames per second. Now — and this is the goal of my talk — we're going to look at the techniques we use in the library to be able to achieve that. The first technique we use, technique number one, is batching. We try to minimize data processing and manipulation in the library. While interacting with and animating the map, we want to reuse the data as much as possible instead of calculating, calculating, calculating. So we cache and batch the style calculations, the geometry simplification, the R-tree lookups that we need to get the features out of the source, and the creation of objects in general. We create a batch with all the data we need, and then we can reuse and replay that batch as needed. And actually what we do is replay the same batch when animating and interacting with the map. So that's the first technique we use. Another thing we do is geometry simplification. You probably already know about that. What we do here is: before rendering, before drawing, we simplify the geometries, so what we draw is only simplified geometries. We use the famous Douglas-Peucker algorithm for lines, and we use a quantization algorithm for polygons — this is to be able to maintain the relationship between adjacent geometries. Another thing we use a lot is what I call over-simplification. If you have a very complex geometry, and that geometry spans more than your map viewport, then we can oversimplify, as shown in this figure, the parts of the geometry that are outside the viewport. So in this example I have a complex geometry, but everything that is outside the viewport is drawn as a very simple geometry. And we do that in a way that maintains the topology of the geometry. I'm going to show an example of this. This is a fractal. Right now the fractal has 3,000 points. I can increase that — oh, sorry, the other way. So this is a fractal with 13 points, and I can increase it here. I'm going to increase it to about 800,000 points. As you can see, I can still interact with the map without any problem. And this is because we simplify the geometry, so we don't draw as many points — we don't draw 800,000 points in that case. If I zoom the map now, I see more and more details of the geometry as I zoom. But now the over-simplification algorithm triggers, because everything that's outside the viewport is simplified a lot, down to just a few vertices. So in this example you can see both techniques in action. Another thing that is rather unique is the way we render vectors. We don't have an intermediate canvas for vectors, which means that vector geometries are drawn directly to the output canvas — the canvas you see on the screen — which means that we only pay for the pixels we draw, and we can avoid compositing transparent pixels. The good consequence and the good property of that is that with OpenLayers, vector layers are very, very cheap. You can have hundreds of vector layers and there is no per-layer cost, basically. So those were the performance techniques that we use. Now I'd like to talk about another feature: feature hit detection. How do we detect a feature on the map? I'm going to start with an example. This is a map, again, with a background layer, and on top of it there is a KML layer.
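Before the demo continues, here is a generic Douglas-Peucker sketch to make the line-simplification idea above concrete. This is not OpenLayers' internal implementation; points are [x, y] arrays and the tolerance is in the same units as the coordinates.

```javascript
// Perpendicular distance from point p to the line through a and b.
function perpendicularDistance(p, a, b) {
  var dx = b[0] - a[0], dy = b[1] - a[1];
  var len = Math.sqrt(dx * dx + dy * dy);
  if (len === 0) {
    return Math.sqrt(Math.pow(p[0] - a[0], 2) + Math.pow(p[1] - a[1], 2));
  }
  return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
}

// Keep the endpoints; recursively keep only vertices that deviate more than the tolerance.
function douglasPeucker(points, tolerance) {
  if (points.length <= 2) { return points.slice(); }
  var maxDist = 0, index = 0;
  for (var i = 1; i < points.length - 1; i++) {
    var d = perpendicularDistance(points[i], points[0], points[points.length - 1]);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= tolerance) {
    return [points[0], points[points.length - 1]];
  }
  var left = douglasPeucker(points.slice(0, index + 1), tolerance);
  var right = douglasPeucker(points.slice(index), tolerance);
  return left.slice(0, -1).concat(right); // avoid duplicating the split vertex
}
```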
You can maybe zoom a bit. And as I move the mouse, I can detect the features and here I display a very simple pop-up with the name of the feature. So as you can see, this works very well. This is very precise. Oops. And very, very fast. So how does it work? So the one thing to know first is that Canvas itself does not natively support hit detection. There is nothing. With Canvas, you just draw pixels and that's it. You cannot know. With SVG, you can register listeners on your elements, but this is not the same with Canvas. So the technique we use here is that, so we have this batch of features, as I explained before, and we, so that's our scene. With all the features we have on the screen. So we redraw the entire scene in a one by one pixel Canvas. And then we test if there is a color. So we draw feature by feature and for each feature, we test if there is a color. And if there is a color, there is a hit. So the advantages of that mechanism is that we can also detect features that are under other features. So we can detect everything. It's pixel perfect because we render every feature the same way they are rendered in the output Canvas. And we don't support that yet, but later we can introduce a tolerance. So we can, we are able to detect a line that are one pixel thin. We can detect them easily and which is important on touch devices, for example. So now I'd like to talk about WebGL. So as I said, there are three renderers in OpenLayers, DOM, Canvas, which is the one I talked about before, and there is also WebGL. So a few words about WebGL. So WebGL is now everywhere. We talk about it more and more. It used to be absent from iOS and it will be in iOS 8, which I think is just around the corner. It should come out in two weeks or something. So which means that WebGL is supported by every major browser now. And the important thing about WebGL is that it allows to do things that are not otherwise possible. So now I'm going to talk about the current status of this WebGL renderer in OpenLayers 3. So right now it's a bit limited because we only support, we only have support for tile and image layers, but not for vectors. You can do some basic image effects like changing the use, saturation, contrast. This is already supported, but vectors are not yet supported, but this is the main thing we want to work on. So these are the perspectives for the project. We want to push WebGL, the WebGL renderer forward. We want to work on it in the future. That's our main thing. So we want to add vector support to it. And we also want to be able to do tilted and perspective views. And to conclude this talk and to finish, I'm going to show you an example of this tilted perspective view. So this is just a prototype right now, but this is to show you that this is something we're taking seriously and we are currently working on. So this is again a regular map with a big maps and I can tilt the map so I have a perspective view which later will allow us to view things in 3D. So for example, if I have a building, we'll be able to see the building in 3D. So that's we have in mind for the future of OpenNAS. This is the end of my presentation. Thank you. Any question? Yes? Hi. I was wondering, I noticed you had labels placed dynamically on features earlier in your talk. And I was wondering if the positioning of those labels, are those pre-generated or is that figured out on the fly based off of centroids of the polygons? Yeah, it's based on the entire point of the geometry dynamically. 
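Returning to the hit-detection technique described a moment ago, here is a generic sketch of the one-by-one-pixel-canvas trick (not the actual OpenLayers internals). drawFeature is a hypothetical callback that renders a single feature with the same code path used for normal drawing, in map pixel coordinates.

```javascript
function featureAtPixel(features, pixel, drawFeature) {
  var canvas = document.createElement('canvas');
  canvas.width = 1;
  canvas.height = 1;
  var ctx = canvas.getContext('2d');

  // Walk features from top-most to bottom-most so the first hit wins.
  for (var i = features.length - 1; i >= 0; i--) {
    ctx.clearRect(0, 0, 1, 1);
    ctx.save();
    ctx.translate(-pixel[0], -pixel[1]); // move the queried pixel onto (0, 0)
    drawFeature(ctx, features[i]);       // replay this feature only
    ctx.restore();
    if (ctx.getImageData(0, 0, 1, 1).data[3] > 0) { // non-zero alpha => hit
      return features[i];
    }
  }
  return null;
}
```

Because every feature is replayed exactly as it is normally rendered, the detection is pixel-perfect and also finds features hidden beneath other features.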
So you mentioned the vectors get simplified. So do they get simplified on every extent change? Or recalculated on every extent change? So each time we, I would say every time you zoom, they are recalculated. But not during the zoom animation. So we use the same geometries, simplified geometry while animating. But at the end of the animation, then we re-simplify it. When should I start getting worried about the size of data that it's simplifying? Or is it going to make it slow? Simplification? Yeah. If it's happening on every extent change? It's not happening on every extent change, first of all. And we try to make it fast for you. So you shouldn't have to worry about it. If it doesn't work, then we have a problem on our side. And it's, you don't have to care about it. I mean, you just provide your geometries and the simplification is done for you just before rendering. But the geometry in the source still is not simplified. It's only for the rendering. Okay. And how many vectors or vertices can I draw that good performance? So currently, I think I can say that we support tens of thousands points vertices, something like that with the Canvas render. So currently, we do not support vector, but we have made prototype and with one million points, no problem. Once the data is in the GPU, you're safe. Okay. A related question. So with the Canvas renderer, you manage thousands of points, right? Yes. All right. So then the bottleneck is getting the data down to client. Right. So for the moment in OPLAYS2, we use like Victor layers when there's not very much features and then we go for WMS layers when we need to present a lot of features. So I don't think the mic is. Oh, okay. You have to be really close. All right. So for the moment, we use WMS layers when there's a lot of features in OPLAYS2 points X. And so I think the question we both ask here is, or the question I have, do you think that we could stop doing that and just always use vector layers? Not always, no. Okay. But that's. So where is that break point? I mean, if we don't care about getting the data down to client, where's the break? And also, when you say it works for thousands of features in Canvas, is that in Explorer as well or yes, in good browsers? Yeah. But I think if you use IE, if you use recent versions of IE, IE 10 and IE 11, then it's okay. But it's a good Canvas implementation. Okay. And it's improving as we speak. Do you have any time frame for the vector layers in WebGL? We are going to work on it next month, but only for points for now. So we're going to hopefully have a point, a WebGL point implementation by the end of this year. This will come next. I don't have any time frame for the rest, but we are very interested in pushing that forward. It sounds like one of the limitations on the vector data now will be just uploading that to the client. And to that end, I imagine map tiles will continue to be used. And to get interactivity from the map tiles, I'm wondering when UTF Grid support will be offered. I don't know. This is on our list. We want to support UTF Grid in the future. But currently there is no one working on it. But we're obviously interested in having support for UTF Grid in the future. But I can't say when this will come right now. So that example you showed earlier with the vectors, was that using just WFS vectors or was that using vector tiles similar to what Mapbox provides now? No, no. This wasn't in the examples I showed. This was not vector tiles. 
Just I think this was one example was KML, just a KML file, KML document. And the other one was GeoJSON. So would it support something? We already have support for displaying vector tiles. And we have an example showing this. But this is something we still need to work on. Okay. Thank you. Thank you.
|
We've rewritten OpenLayers from the ground up with the goal of offering a powerful, high-performance library leveraging the latest in web technologies. This talk will present the latest advances of the library, focusing on aspects that make OpenLayers 3 stand out. OpenLayers 3, for example, uses technologies, techniques and algorithms that enable high-quality and high-performance vector rendering. Come learn about the optimizations and techniques OpenLayers 3 uses internally, and how you can leverage them in your next web-mapping applications.
|
10.5446/31641 (DOI)
|
And we'll talk about GIS things we do in the browser these days. And this is clearly not the model that we want anymore. You still see an unfortunately large number of sites like this when you go and look at government and county websites and other places like that that have data or just provide some kind of viewer for information. They've still stuck on a very old model. And a lot of that is also having to do with that so much of that stuff goes back to the server to do anything interesting at all. I mean, most of the stuff is just display. And I think we've moved beyond that. So this is less of a demonstration of what is something that you could just copy and paste and create yourself right now, but more of what's possible and what we should be moving towards. We've got nice modular CSS frameworks and UI frameworks that we don't need a whole solution anymore. You can put together little pieces you want and style it the way you want and make things that are individualized and do things that are specific for your users. So you don't even need to have a web server to look at shape files and that kind of stuff these days. I can't count the number of times that people ask me, well, I've got a shape file and I want to see it on an open layers map or some other kind of map. How do I do that? And for the longest time, I was like, well, you really have to have some kind of server and it's got to be converting that shape file into GeoJSON or something else like that and then serving it out. And so now I've got a nice little drag and drop tools that will let you just drag and drop stuff onto a map like this example from Calvin Metcalf and I can just drop a fairly large number of points onto here and then it sends those back to a worker, it takes the zipped shape file, unzips it and displays the points on your browser and of course, no, I'm doing it live, it's not working or it's working super slow. It was impressive earlier. Maybe we'll come back to that. I might regret doing mostly demos. All right, well, we'll come back to that. So what's powering a lot of this stuff is some newer APIs and they're really not new there but the support for them is now ubiquitous enough that unless you, for some reason, have to support IE8 or less and even with that there's polyfills and stuff like that. So for most of this, IE9 or IE10 will get you and then with all the other modern browsers, the things have been supported for a long time. So being able to draw with Canvas, which provides you a nice drawing surface that you can very quickly draw a lot of information on and it doesn't slow down your browser to SVG support. Again, you see that's IE9 and above and then most powering that other example, which maybe we can get back to. Is Web Workers, which those are a separate process that is in the background of your browser. So it'll spin up extra threads and do stuff on those threads and it doesn't have any real access to the DOM but it does provide a good way to do computation like intensive stuff without slowing down the browser. And the support for it is pretty widespread as long as like I said you're on IE10 or above. There's some variations and differences with those things which is a... You kind of have to deal with but... So like... Is that coming through? No. Let's see if we go... No? Hmm. Alright, well, I'm not able to scale this properly. Okay, well never mind. Anyway, that's the efficiency of passing data back and forth. 
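Before returning to the benchmark chart, here is a minimal sketch of the worker message-passing pattern being measured. The geojsonFeatures variable and the doExpensiveGeoprocessing() function are placeholders for whatever data and computation you hand off; remember that the worker script has no DOM access.

```javascript
// main.js
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('result from worker:', e.data);
};
worker.postMessage({ cmd: 'buffer', features: geojsonFeatures, distance: 100 });

// worker.js (a separate file)
self.onmessage = function (e) {
  // Heavy computation runs here without blocking the page's UI thread.
  var result = doExpensiveGeoprocessing(e.data); // hypothetical CPU-heavy step
  self.postMessage(result);
};
```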
So things are IE and those other ones are Safari, Firefox and Chrome with the lower being the better. You can do... How long it takes to do a thousand messages. Anyway, so they've got... We've got these things that can help us out and you can do computation like tens of stuff that with the browser still remaining available. Let's see if this one... I'm definitely regretting doing live demos because the three things that I wanted to demonstrate are not working now that I am in front of everybody. So one of the things that makes it easy to use, workers are really gives the workers a good advantage is these strongly typed arrays which again, they're IE10 and above and all the modern browsers. And so it's instead of just a normal JavaScript array which can be a list of any objects, it's like unsigned integers or buffers. So it's numbers. It's just numbers. That can be very quickly passed around by reference or transferred over to different threads and operated on very efficiently. So that kind of stuff really powers some of the new cool stuff you see in WebGL and allows you to do interesting stuff. So WebSQL database, it's never really caught on and that's a shame because it's a good way to store things and gave you a nice, you know, SQLite database in the browser. But we do have WebStorage which is a local storage thing and we do have IndexDB, which is one of these, which is got pretty ubiquitous support. The only thing that's really missing is Safari and that's got a good shim as well. And the next version of Safari which should be coming out very soon now, they made the announcement at Apple thing, is we're going to have full support for it as well. And with that you can use things. So with, you can use stuff like leaflet and if you have to talk to, you know, ArcGIS server or want to use things like that, we've got, there's Esri leaflet which is just another communication layer that lets you talk to, it knows how to talk to the ArcGIS servers and then bring it into a visualization on leaflet which is, gives you a nice interface to use with that. One of the things you can also do is use Terraformer and, I cannot find this stuff. Anyway, that you can use IndexDB and local storage to persist your stuff. So you could pull down a dataset and then it'll get stored in local storage. You could be disconnected and still be able to show it later. So even stuff that's been thought to be fairly complex and that you wanted to do on the server of like taking points and triangulating them into tins and then displaying them, this is a simple example on using D3. But you can use another library called Delaney Fast which you can find and use it directly as stuff like leaflet or open layers or the Esri ArcGIS API and get the same kind of speed and performance which unfortunately I was going to show some of that. But my live demo is not working at all. Well, without my demo I'm kind of done. I'm sorry. Anyway, is there any questions or thoughts or comments? Yes. Yeah, there's size limitations and it depends on the browser and the device. So, you know, if you're like on a desktop versus a mobile device, those are different. Some things like will let you request more space, other things, they will, other browsers will actually make you like destroy the database and do a new one. So, you have to worry about that. Limitations are generally reasonable unless you're storing like lots of data. 
So, you could get away with a pretty large amount on indexed DB which is significantly more than you could do in local storage. Sure. You said the SQLite web workers that they had some limitations as to what you could actually give them, what data they could work on once you invoke them. Is that simply numerical or is that just primitives that it can work with? No. So, you can pass pretty much anything over to a web worker. The things that you can't pass are DOM elements and the canvas element. So, your native DOM elements, you can't pass those over because there's no DOM over in the browser. I mean, sorry, over in the worker. But anything else you can. So, you can pass it any of the primitives. You can pass it any of these strongly typed arrays and you can pass it blobs. So, one thing you could do is, and that's part of my demo, you take a file so you can have a file input and open a file or just drag it to a drag and drop area. And then you can pass that directly over to a worker and then it can work on it and pass the data back to you. So, that's a, it's nice, it's really nice to have that ability and it makes it so you're not tied to having a server anymore. Just to display and manipulate data to some degree or having to use the desktop application. And if you're going to do, you know, several million geocodes or, you know, you're buffering and coalescing, you know, large geometries and stuff, probably are performance-wise going to do a lot better doing it on a desktop or possibly even server side. So, for moderately complex stuff, simple to moderate complex, the browser has gotten good enough where you can do a lot of interesting stuff in there. Yes? So, I was wondering how that clustering works? So, because this is a client-side library, so you still have to bring all the data over to the client before you do clustering, right? So, you have to reduce the network traffic or... Right. So, there's some strategies for helping out with that and one is to bring that data down and then calculate the clusters over in a worker thread and then send the clustered features. So, the cluster, like, bounding box and its information over back to the main thread. But you're right, it doesn't... It's not... You're not... You're still having to pull that data down. So, if you're doing... You can stream data through web sockets and other things like that. It just depends on what the server is supporting and if you've... So, if you've got as a server that will support streaming, you can also improve your performance that way too. Could you say a bit more about the implications of the Delaney triangulation? I kind of missed how that connected to everything else. So, I was an example of something that's relatively complex and that you could now do efficiently in the browser. Just as like something that even two years ago you wouldn't think about doing without doing it over on a desktop application or saying it server side. So, that's what that was and I had a... I demo had points that were going to be triangulated in shown but unfortunately never did open while we were here. Any other questions? Go ahead. The demo you had which you dropped a file with a SIPT shape file. As I understood, that was supposed to go to workers. Yes. All right. So, is there libraries for unsipping files or do you have to implement that yourself? Yeah, no, there is. There's a shape file JS which understands how to read shape files and then how to unzip them as well. So, you can use that and it's really nice. 
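A sketch of that drag-and-drop flow: the dropped zip is read as an ArrayBuffer, transferred to a worker, parsed there, and the resulting GeoJSON is added to a Leaflet map. The dropZone, shapeWorker, and map objects are assumed to already exist, and the shp.parseZip() call reflects my reading of the shpjs library's API — treat the parser details as an assumption and check its documentation.

```javascript
// main.js
dropZone.addEventListener('dragover', function (e) { e.preventDefault(); });
dropZone.addEventListener('drop', function (e) {
  e.preventDefault();
  var file = e.dataTransfer.files[0]; // a zipped shapefile
  var reader = new FileReader();
  reader.onload = function () {
    // Transfer the ArrayBuffer to the worker (zero-copy move).
    shapeWorker.postMessage(reader.result, [reader.result]);
  };
  reader.readAsArrayBuffer(file);
});

shapeWorker.onmessage = function (e) {
  L.geoJson(e.data).addTo(map); // render the parsed GeoJSON with Leaflet
};

// worker.js
importScripts('shp.js'); // assumed bundle of the shapefile parser
self.onmessage = function (e) {
  var geojson = shp.parseZip(e.data); // assumption: parses a zipped-shapefile buffer
  self.postMessage(geojson);
};
```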
There's some other interesting things that people do with workers is relate to stuff like that that is just dealing with binary data. It's like creating animated gifs from... There's a thing where you can actually just select a number of gifs from your computer and then it takes those into a web worker, creates an animated gif and gives it back to you. And it's all right there in the browser. So... Okay, so, sorry. Sorry your demo didn't work but the idea of like pushing the zip file out to a web worker and sort of doing that in parallel as the page is loading is really awesome. That's an awesome idea. Do you have any URLs you could share with us for the zip code or the code for doing that so we can look at it offline? Yeah, well, I was... So, I'll put the demo on my GitHub repository and also tweet that information out. So, it's Mprio is my GitHub username, so github.com.mprio and it's Matt Prio on Twitter. So, that first example that I showed, that's actually from Calvin Metcalf and... leaflet.CalvinMetcalf.com and you can drag and drop shape files and topo.json and json onto the map and it'll display it. So, I took that example and then added some other stuff to it from the demo. Yes. If there are no more questions, I just wanted to make a quick announcement. If you were in this track for the whole time, you saw how crowded it was and if you're interested in the same track, track four for after the break, we've decided to swap rooms. Track four and track seven will be swapping rooms because it's a little bigger over there, a little more seating. So, just an FYI. Track seven is around the corner here, 143-144. It's a double room. Thank you.
|
Long gone (hopefully) are the days of replicating the "professionals only" desktop GIS interface in a browser. However, with modern browsers, HTML5 APIs, and increased efficiency of javascript engines it is possible to performantly replicate GIS functionality in a purely client-side browser application. Moderately complex geoprocessing, persistent client-side storage and simple to complex data visualization are all possible now. We walk through the underlying technology and demonstrate the practical use of it in an open-source sample application. Technologies covered include IndexedDB, WebStorage, Workers, Strongly Typed Arrays and Canvas. Some attention will also be paid to performance limitations, browser support and polyfills for older browsers.
|
10.5446/31643 (DOI)
|
Good afternoon, folks. Thank you all for coming late in the day. My name is Micah Wengren. I'm with NOAA, the National Oceanic and Atmospheric Administration, US Federal Agency. On behalf of my co-author, Jeff de la Bojoudière, the NOAA Data Management Architect, I'm going to be discussing the topic of supporting open data with open source. OK, that looks better. Thanks. So moving right along. So this talk is kind of divided into two different parts. And the first segment, I'm just going to talk a little bit about the background on the topic of open data, and I guess what it means in the context of this presentation, and also in the context of the US federal government at this stage. So back in the middle of 2012, there was a presidential memorandum released, federal government-wide, entitled Building a 21st Century Digital Government. And the real message of that was trying to codify some specific ways in which the government could increase usage of its services and also just improve the overall digital experience of the citizens of the US. So that was intended as kind of a broad umbrella document with more specific follow-on policies to come later on. Most relevant here is what's called the Project Open Data or Open Data Policy, which followed last year in May 2013 in the form of an executive order titled Making Open and Machine-Readable the New Default for Government Information. So this was a specific policy that had some requirements placed on federal agencies and departments to release their data where appropriate in open, interoperable formats with open licenses as well. So the main message of this policy was really just to treat government data and investments in government data, I guess, as an asset. So recognizing the intrinsic value of those investments and the intrinsic value of the data itself. So the policy actually cited a few examples of historic releases of open data by the government. And those included both the GPS system, which I think is particularly relevant here. Everyone knows of the value of GPS in our current lives. So that initially was a private closed system developed by the Department of Defense. It was released in the early 90s when it was completed for public use. The second example is actually weather data that's released by my agency, NOAA. And as NOAA has traditionally been an open data agency in that regard. And in both cases, there are really large industries that have been built exclusively off of that data. So crafty developers and entrepreneurs who have innovated and created value added services on top of the data. So really the core of the Project Open Data executive order is to delineate a specific metadata schema, which consists of both a vocabulary and a data format for describing data sets that an agency releases. So the format used in the policy is JSON, which we're probably all familiar with. And the vocabulary is sourced from what had been previously common geospatial metadata or other metadata descriptive vocabulary words. I should also mention that the schema itself is released on GitHub in the spirit of open source. So the creators of the policy really wanted to embrace the spirit of open source and to get input from both users of the actual data and the schema, as well as implementers like federal workers ourselves, such as myself. So a little bit more detail on the actual files themselves. The executive order essentially mandated that each federal department lists its open data at a particular prescribed location on the web. 
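To give a feel for what one entry in such a data.json file looks like, here is an illustrative record written as a JavaScript object. The field names follow the Project Open Data metadata schema as I recall it; the authoritative vocabulary lives in the schema's GitHub repository, so treat this as a sketch rather than a compliant record.

```javascript
// One hypothetical dataset entry from an agency data.json inventory.
var datasetEntry = {
  title: 'Example Coastal Bathymetry Survey',
  description: 'Gridded bathymetry for an example survey area.',
  keyword: ['bathymetry', 'oceans', 'example'],
  modified: '2014-06-30',
  publisher: { name: 'National Oceanic and Atmospheric Administration' },
  contactPoint: 'Example Data Manager, NOAA',
  identifier: 'gov.noaa.example:12345',
  accessLevel: 'public',
  license: 'http://www.usa.gov/publicdomain/label/1.0/',
  distribution: [
    { accessURL: 'https://data.example.gov/bathymetry.zip', format: 'Zipped shapefile' }
  ]
};
```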
So the public users could count on accessing these data.json files — sometimes very massive files, just a word of warning; don't try to parse them at home on your 486 or something. Basically the policy dictated that these be published to a particular URL, so there was some consistency there. And — I don't know if it's really visible — this is just a small example screen capture of part of one data set that NOAA produced to comply with the policy, and I also listed a few of the schema elements. If you're familiar with geospatial metadata, you can get the idea that there's some carryover with some of the common language used there. So, in order to meet this mandate: NOAA, as I mentioned before, has traditionally been an open data agency, and we're comprised of several data centers who have been releasing data for free online for a number of years and who have, as a result, developed their own catalog systems and their own inventories to facilitate that data access. However, we needed a way to essentially merge that existing information into a single output file, this data.json file, which would then be fed up the chain to the Department of Commerce — NOAA is actually an agency under DOC. In order to do that, the decision was made to deploy a centralized data catalog that would be able to harvest from these existing remote catalogs. That catalog is based on CKAN software, which is open source. And it was actually a collaboration between NOAA and the Department of the Interior, through an existing interagency working group called the Federal GeoCloud, to co-develop these systems — one to be deployed for Department of the Interior use and one for NOAA to deploy on our own. The way this system works is first by harvesting the remote data inventories and making use of a plugin that's been developed for CKAN, related to Project Open Data, that can handle the translation from the native metadata format to data.json. So just a little workflow diagram, I guess, of what the catalog does: it takes in the existing data and does that translation. It also adds the benefit of a CSW endpoint for query and data access, as well as the native web API that CKAN provides. So that's some context, I guess, for the rest of my talk. What I want to focus on is a particular full open source stack that we're experimenting with deploying in NOAA. It's not necessarily operationally used at the moment, but I just wanted to take some time to illustrate how a few well-known open source projects that we're all familiar with here can work together in compliance with Project Open Data. The first of those is GeoServer — we all know that's a spatial data hosting platform for OGC services. The second is GeoNode, which is essentially a web-based geospatial content management system that's built to sit on top of GeoServer and provide a dynamic, modern user interface to allow users to discover and access the underlying GeoServer services. And last, of course, is CKAN, which I've spoken about before. So, NOAA's background with GeoServer: historically over the years, GeoServer has certainly been used kind of piecemeal in different offices in the agency, along with other open source spatial data hosting systems.
However, it hadn't really been used as an enterprise-wide solution until 2011, 2012, when the NOAA High Performance Computing and Communications project chose to fund a project to set up a prototype GeoServer that could be deployed agency-wide and used by individual office data providers who maybe didn't have the resources to run GeoServer themselves, and who could just rely on a shared solution to publish their data. Funding through that project was provided to OpenGeo to provide a few enhancements to GeoServer. The first was to finalize some work that had been done on the security subsystem to enable enterprise integration capabilities like LDAP authentication. The second was having first-class support for isolation — essentially an improved user management and permission system, so that you could restrict users to have access only to their own information and not across the board, which is obviously essential for an enterprise deployment. So as a result, the NOAA GeoServer hosting environment has been online for about two years for testing and evaluation, at the URL listed here. This has been a prototype and wasn't really planned for operational transition. However, I just wanted to highlight that this past year, the Weather Service, as part of their Integrated Dissemination Program, chose GeoServer alongside Esri ArcGIS Server for production geospatial hosting services. So there will be some production web services running off of GeoServer in NOAA in the near future, which I think is pretty cool. So I'm going to step through this stack, this open source, open data stack that we've been testing out. The first layer, obviously, is GeoServer. This is a bit of a simplification — GeoServer provides many additional service types; I just wanted to highlight WMS and WFS, which is what we've primarily used in our incubator prototype system. And of course PostGIS should also be mentioned, because PostGIS and PostgreSQL are the underlying data storage backbone for our GeoServer instance, and they're also used in each other component of the stack as well. The second tier in the system is GeoNode. For those who aren't familiar, GeoNode is a web-based geospatial content management system. It's pretty tightly coupled with GeoServer — you essentially pair a GeoNode instance with a GeoServer instance — and it gives that kind of modern web user interface. It's really good for data discovery, it has fine-grained permission controls, and other things. So NOAA's history with GeoNode goes back a few years as well. It was actually included as part of the Federal GeoCloud interagency working group in 2012. A NOAA group had a proposal accepted to participate in that, which basically set up a shared infrastructure for transition of agency-hosted geospatial services to the cloud, to Amazon Web Services. So we collaborated with them, tuned the system a little bit so GeoNode would run on it, and have kind of been tinkering with it ever since, I guess. However, even though our NOAA GeoNode system isn't publicly deployed yet, partway through the project the Department of Energy came along and decided that they were actually interested in using GeoNode. So they were able to essentially use our infrastructure as a starting point and deploy their own GeoNode-based system called NEPAnode, at the URL below. And that's related to the National Environmental Policy Act.
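As a concrete illustration of the kind of standards-based access a shared GeoServer environment provides, here's a sketch of requesting features as GeoJSON over WFS from the browser. The host, workspace, and layer name are placeholders, not NOAA's actual endpoints.

```javascript
var params = [
  'service=WFS',
  'version=2.0.0',
  'request=GetFeature',
  'typeNames=noaa:example_layer',        // placeholder workspace:layer
  'outputFormat=application/json',       // GeoServer's GeoJSON output
  'count=50'
].join('&');

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://geoserver.example.gov/geoserver/wfs?' + params);
xhr.onload = function () {
  var featureCollection = JSON.parse(xhr.responseText);
  console.log('features returned:', featureCollection.features.length);
};
xhr.send();
```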
So I'm going to step through some quick highlights of GeoNode features, for those who don't know. This is a screen capture; it kind of brings to life individual data layers within a GeoServer service. A user can go to the GeoNode site and search by some common fields such as title and abstract, can filter by ISO topic category keywords, and of course, if there's temporal information, they can filter by that as well. GeoNode also includes an integrated CSW service. This is critical for this overall stack design, as you'll see later. By default that's currently based off of pycsw, but it can also be kind of plug-and-play with other systems — so if you want to use something like GeoNetwork, that's available as well. That provides a good connection point with desktop GIS: for a QGIS user who is using the spatial search extension, or any other extension that can talk to a CSW service — Esri ArcMap as well — that's a great data discovery tool. So, data access: GeoNode, as I mentioned, is pretty tightly coupled with GeoServer, so it understands the output formats that GeoServer provides. Once a user has logged in and found the data they're looking for, it provides a convenient endpoint list, and it's very easy to download information directly. Additionally, there are two different ways you can upload data to GeoNode. It can be configured so that a user can log in through the web interface; if they have a spatial data set they want to share, they can interactively push it to GeoNode, fill out some relevant metadata, and GeoNode will push it back to the GeoServer level automatically. There's also the opposite approach, which is taking data from an existing GeoServer and sucking it into GeoNode. Either way, once your GeoNode instance is populated with data layers, you get some neat capabilities. There's an integrated metadata editor, which I have here — if there's some information lacking from the native metadata, you have the option to fill it out via the user interface. There are also pretty fine-grained access controls, so you can share data with particular users if you want, or groups of users, or you can just publish publicly as well. Very recently, within GeoNode, there's been some work done on some pretty cool new features, the first of which is remote services. GeoNode is really meant to run off of GeoServer; however, it does have the fledgling capability to connect to a remote ArcGIS Server endpoint and parse layers from the ArcGIS Server REST API, as well as remote WMS and WFS servers and some others. Second among those is GeoGig. For those who don't know, GeoGig is very similar to Git — it's basically versioned editing for geospatial data. With some recent work done through some GeoNode partners, GeoNode provides kind of a read access: if you configure GeoServer with a GeoGig repository, the edit history for your geospatial data can be read by GeoNode and displayed within the user interface. There's also an external mapping client called MapLoom that will actually handle the editing side of it as well. So if you have a spatial data set, you can configure a GeoNode instance to work with MapLoom and provide disconnected editing and also two-way sync with a remote GeoGig repository. So it's a pretty powerful data editing workflow. What did you call it in here? That's MapLoom? Yeah, there's actually a presentation tomorrow or Friday — I forget which — so check it out. So the last feature that I wanted to mention in GeoNode is Maps.
So of course, once you populate it with this variety of layers, you can create this integrated map mashup, with the same permissions to share it with the users you choose. So getting back to the architecture diagram: GeoNode — our NOAA-themed instance — sits mostly on top of GeoServer. It talks to GeoServer via the REST API and adds that CSW endpoint for data discovery, as well as the interactive catalog. So, moving along to CKAN: that CSW endpoint in GeoNode actually allows CKAN to use it as a remote harvesting point. As I mentioned before, in our NOAA data catalog we are currently harvesting several remote catalogs, and via the GeoNode CSW, that integration can happen there as well. So any layers that you include in your GeoNode instance can be automatically harvested by CKAN. And there are maybe some similarities between the two products. CKAN takes a little bit more of a data catalog approach to presentation. It does a good job of parsing out fields from spatial metadata and presenting them in an approachable, user-friendly way. It's good at parsing out online resource linkages, so users have direct access to the endpoints you want them to use to access your data. It's also pretty efficient in terms of search: it has a back-end Apache Solr instance that can be configured to handle spatial search as well. So it's pretty powerful, and it really sits alongside GeoNode in this system. And of course, it can handle the data.json translation — so if that's of interest to anyone out there, especially federal users. The other thing that I wanted to mention is that CKAN has some interactive mapping capabilities as well. If your GeoNode instance — or really any spatial metadata — provides a WMS GetCapabilities endpoint, CKAN has a native map preview tool, so there's good interactive capability there. GeoNode actually has to be a little bit modified — I had to tweak it myself to provide this capability — but that's something that hopefully will be merged back into the core at some point. So our diagram here: data.noaa.gov is the URL for our CKAN instance. You can see I put them side by side. Really, it just complements your existing GeoNode site. It provides that remote harvest capability, as well as integration with any external catalogs that you may want to use. So lastly, data.gov. For those who aren't familiar, this is sort of the behemoth of US federal open data. This is the federal government's open data catalog. It's also CKAN-based, and it works very similarly to the NOAA data catalog. It does remote harvest of a whole variety of existing federal geospatial metadata sources. I think the plan is to have it at some point exclusively harvest the data.json files — I don't think that's quite implemented yet — but nonetheless, via one means or another, it's the merged collection of all federal open data, according to the open data policy. So again, it kind of just sits alongside the core of the stack that I wanted to highlight, but nonetheless, in the federal space, data.gov is certainly important and based off of some of the same software. So just a few take-home points that I wanted to make. Hopefully I've shown how these open source technologies can be used together to create a full open data stack for geospatial data that complies with federal open data policy, if that's of interest to you. NOAA, as an agency, is trying to continue its role and leadership in the open data world.
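As a concrete footnote to the catalog layer described above, here's a sketch of querying a CKAN catalog programmatically through its action API. package_search is part of CKAN's standard API, but the host and query terms here are made up.

```javascript
var url = 'https://data.example.gov/api/3/action/package_search?q=bathymetry&rows=10';

var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onload = function () {
  var response = JSON.parse(xhr.responseText);
  if (response.success) {
    response.result.results.forEach(function (dataset) {
      // Each result carries the parsed metadata plus its resource links.
      console.log(dataset.title, '->', dataset.resources.length, 'resources');
    });
  }
};
xhr.send();
```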
We're keeping up, of course, with the latest policies as much as possible. Lastly, getting back to the original slide I mentioned, the Digital Government Strategy: one of the main goals of that was to develop a shared platform for federal IT infrastructure, and I think the work that's been done on CKAN related to the open data policy really illustrates a good example of leveraging open source software. If you read the Digital Government Strategy that way, it's really encouraging not only the use of open source software, but also contribution. As a community of IT users in the federal government, why shouldn't we work together to develop a common product on our own and collaborate, as opposed to sitting around and waiting for someone else to do it, or going out and buying the same thing many, many times? It doesn't really make sense. So lastly, I just wanted to mention that a lot of this work that I've been involved with over the last few years wouldn't have been possible without the support of a guy by the name of Doug Nebert, who passed away this year, tragically. Really, a lot of this, and a lot of other advancements in the federal geospatial space, wouldn't have been possible without Doug's leadership. So I just wanted to give credit where credit is due — there's a real debt of gratitude to him. If anyone has any questions, I'd be happy to try to answer them, or you can reach out to either myself or Jeff via email or Twitter. First, thank you. Yeah. Yeah. Thank you. Thank you. No questions at all? Yeah, apparently you seem to. Oh, yeah. I guess one. You know, has anybody created AMIs for GeoNode and CKAN that are publicly available? There must be. OK. Yeah. I don't know for sure. Yeah, the work done with the GeoCloud — I think there's no reason that it couldn't just be baked into an AMI, but I don't know for sure. I'm guessing they have to be out there. I'm just holding the mic. This is probably going to expose my ignorance, but how does the GeoJSON deal with raster data sets? Oh — the data.json? Yeah. Yeah, so it's really leveraging JSON as a metadata format. In terms of actual encoding of spatial data, it doesn't do that. I probably should have made that graphic a little bigger, but it basically provides the associated metadata for the data set, along with, say, an access URL. So whether it's just an open data set published on the web, or a web API, it'll contain that link. But in terms of actual data encoding, it doesn't really do that. Anybody else? I'm not familiar with GeoNode and CKAN as much as I'd like, but when would users use one versus the other, since you're talking about both? Yeah, that's a good question. I think CKAN is really a good entry point to the actual data, or to a data set that exists in GeoNode. So if you make that connection, the CSW connection between the two — one thing CKAN does well is that it's indexed well by Google. So, for instance, if someone does a Google search, they can find the page on a CKAN site and then be directed to GeoNode to get the more interactive mapping capabilities. So I think it would flow that way, most likely. Maybe I could add a note to this. There are two different worlds, the so-called open data world and the so-called open geodata world. They are built on slightly different standards, and they kind of don't communicate with each other that well yet.
And we have to talk to each other — we as geo guys should talk more to the open data guys, to connect those worlds in a more standard way. OK. I assume everybody's looking forward to the next session, which is called Drinks in the Hall, I think. So thank you for coming. Thank you.
|
Within the US Federal Government, there is a trend towards embracing the benefits of open data to increase transparency and maximize potential innovation and resulting economic benefit from taxpayer investment. Recently, an Executive Order was signed specifically requiring federal agencies to provide a public inventory of their non-restricted data and to use standard web-friendly formats and services for public data access. For geospatial data, popular free and open source software packages are ideal options to implement an open data infrastructure. NOAA, an agency whose mission has long embraced and indeed centered on open data, has recently deployed or tested several FOSS products to meet the open data executive order. Among these are GeoServer, GeoNode, and CKAN, or Comprehensive Knowledge Archive Network, a data management and publishing system.This talk will focus on how these three FOSS products can be deployed together to provide an open data architecture exclusively built on open source. Data sets hosted in GeoServer can be cataloged and visualized in GeoNode, and fed to CKAN for search and discovery as well as translation to open data policy-compliant JSON format. Upcoming enhancements to GeoNode, the middle tier of the stack, will allow integration with data hosting backends other than GeoServer, such as Esri's ArcGIS REST services or external WMS services. We'll highlight NOAA's existing implementation of the above, including the recently-deployed public data catalog, https://data.noaa.gov/, and GeoServer data hosting platform, as well as potential build out of the full stack including the GeoNode integration layer.
|
10.5446/31645 (DOI)
|
So, if you were here for the presentation just previous to mine, you're in luck, because Aaron laid a great foundation for my talk. It actually dovetails perfectly — I don't know, we weren't colluding; maybe it's just the great organizers at FOSS4G. But my presentation is on managing public data on GitHub, or "pay no attention to that Git behind the curtain." So it's about GitHub and government. And if you caught Vlad's talk about Leaflet earlier today, he talked a lot about experimentation and creativity, and I think that's something that is really needed in the government sector — something that Code for America is really stepping up and helping out with — but I think there's always room for more of it. So this presentation is really looking at one case where I tried to experiment a little bit with GitHub and managing this process. And it went pretty well — nothing exploded. So, the context: I work at the Atlanta Regional Commission, which is regional government. Basically, we build consensus with all the smaller local governments, counties and cities — and we have a lot of counties in Georgia; we're notorious for that — so there are a lot of partners to manage relationships with. The story behind this project was the census. Every 10 years, the urbanized areas are redrawn according to population change — people move from Atlanta to San Francisco, or all these different places, and the areas have to be redrawn. And along with that, the Federal Highway Administration asks for updated roadway functional classifications. If you're familiar with OpenStreetMap — in OSM speak, functional classifications are just the highway tag and any of those values: motorway, trunk, primary, secondary, residential. But in FHWA speak, we have things like interstates, other freeways, principal arterials — other classifications that basically mean the same thing. There's some funding tied to these classifications, but it's really just a way for the feds to keep track of all the roads that exist out there. So the problem we were facing at the beginning of this process was: okay, how are we going to manage all these stakeholders, all these local governments? We have maps we can print out that people can draw on, get around the table and do a collaborative thing, which is awesome. We have forms that people could fill out — well, Microsoft Word docs — that we could send out, have them sent back or scanned, and then we'd have a pile of paper to go through, and then basically have an analyst go in, try to figure out which road they're talking about, find it on the map, tag it, and change all the functional classifications — really kind of a daunting task. And it's a process that we didn't really want to manage. When I was in the meeting where they talked about it, I thought, there's got to be a better way to do this — but I had to demonstrate that. So we like that first part, but we don't like all the forms, having unstructured data and then turning that into data structures. And this is really a huge process. It starts at the Federal Highway Administration, which is obviously national in scope, communicating with all the DOTs, the state DOTs — GDOT is the one that we work with.
And then GDOT communicates with all the regions in the state and then all the regions kind of coordinate locally. So it's this huge process. And there's no telling how many different ways people are doing it around the country. So, you know, why not, like I mentioned OSM tags earlier, why not just use OSM? Well, there are great tools for editing OpenStreetMap. And that was one of the, you know, we were really considering doing that. Or I was in my head. But the kind of motivations for not using it outweighed the great reasons for using it. So basically, we have this DOT road network that we really wanted to kind of work with and attach all those new attributes to. We wanted to have build out a custom interface pretty easily. We wanted to track changes with the functional classification numbering system. And we wanted to manage this whole approval process so that, you know, it kind of filter up the hierarchy. And, you know, we first tried to kind of figure out what happened with the last census 10 years ago. But because this process happens every 10 years, you can imagine that the tools are going to be different, the people are going to be different, people probably just forgot how it was done 10 years ago. We don't really know. But this form was kind of proposed as a way to actually manage the process. And I don't know about you, but I would not want to have a pile, like a stack of these forms and kind of going back and forth between a road network and doing all this manual entry. So, you know, we, GitHub was kind of what we used. So why do we use it? So Aaron, you know, talked a lot about some great tools that are out there that I think kind of were inspired by GitHub kind of starting to render GeoJSON. We have GeoJSON.io, which is an amazing editor for, you know, pulling up GeoJSON or other shape data and then really easily editing it. And that's all run on GitHub Pages, this great free hosting that you can do. So GitHub Pages is really the curtain in this talk, right? We're kind of keeping GitHub behind this curtain that is GitHub Pages. It's just a web app that's running on GitHub and taking advantage of the GitHub API. And so I said it's really crucial in Gov because sometimes when you're trying to convey an idea, just using words isn't really enough. And by really quickly getting a demo up onto GitHub Pages, you can convey a message like, this is something that works and, you know, we could possibly use it in this functional classification overhaul. And then issue tracking is another great thing that we want to take advantage of. We have to coordinate with all these local governments and we wanted them all to be aware of what changes other people were proposing. And so getting that integrated communications with the tracking, issue tracking, is really important. And we wanted to possibly share this with other agencies. I mean, that's why I'm here today. That's, it's all open source code. And there's this fork and go mentality which Jessica Lord kind of seemed to have come up with, which is basically, you know, you put something up on GitHub and then GitHub really easily allows you to fork projects and make your own copy to kind of customize for your own situation. So this is a continuum that I created. Just further justification for why we needed this curtain around Git, which is kind of scary, right? You know, on the left-hand side we have familiarity with Git and on the right-hand side we have, you know, your subsequent kind of relationship to it. 
So at the very top we have "what's a version control system", which, you know, is where I was at about a year and a half ago. We have novice and expert. And so it's kind of like, I guess, stages of grief or something. You have kind of indifference, so you don't really care about Git. Why do I, as a person who can live my life normally, why do I need Git? And confusion, like, I don't really understand what's happening. Fear, panic, discomfort, reliance, and, you know, you ultimately grow to really depend on Git. So I'm kind of in this panic, discomfort stage probably. You know, sometimes I waver depending on what I've done wrong as far as committing data to Git. The originator of Git is probably at the expert level. But the majority of the target audience for this project is at the indifference stage. And if you started talking about Git, it would lead to confusion most likely. And so there are a lot of battles that you kind of have to overcome to get people comfortable with this thing that is really unfamiliar. And there's been a lot of work by GitHub especially to make Git permeate other areas besides just software development. I think that's amazing. But I don't think we're quite to the point where your average Joe is going to recognize what Git is. So the workflow for this project: we had this GDOT road network that we wanted to submit changes to. We got that from them as a shapefile. We did an ogr2ogr conversion to GeoJSON and did some, you know, reprojection. And, there's a gentleman earlier in the session who asked about file size constraints for GitHub, so we actually had to drop a lot of roads from the dataset. And then we had to split it by county to keep that file size nice and small, because these are huge road networks that we're talking about. And then we load that up onto a GitHub Pages web app, which I'll show in a second. And then we had government staff, you know, view the road segments and see what the existing conditions are and then submit change requests for given road segments in their jurisdiction. And, you know, the other reason we split up the GeoJSON road network into counties is because we're dealing with multiple jurisdictions. And there was a lot of concern, say, for roads that are on the county boundary or that cross multiple jurisdictions. We wanted our partners to feel comfortable that they were submitting changes only for their jurisdiction and also, you know, to kind of keep that collaboration alive. And so one of the things we had to do was split up the GeoJSON into these counties. So it was kind of a multi-purpose reason for that. And you'll see this is a screen capture from GitHub and it shows a few of the different counties. We basically created teams that correlated to counties. And so all of the cities or counties that corresponded to a given county, we would assign them to that team. And that's how they had access to the data and then could submit change requests. And so without further ado, I'll just open up this application. So it should be showing up, yeah. So this is the application we built. You can see it's running on this atlregional.github.io. That's the GitHub Pages domain. And basically we have a map over here. And it's got Mapbox tiles underneath it. And you can see this key, this legend is kind of janky because the screen size is off.
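A rough sketch of that preprocessing step, assuming each road feature carries a COUNTY attribute (the real GDOT schema may name it differently) and using placeholder file names: convert and reproject the shapefile to GeoJSON with ogr2ogr, then write one file per county so each stays comfortably under GitHub's file size limits.

```python
import json
import subprocess
from collections import defaultdict

# Convert and reproject the DOT shapefile to GeoJSON (file names are placeholders).
subprocess.run(
    ["ogr2ogr", "-f", "GeoJSON", "-t_srs", "EPSG:4326", "roads.geojson", "roads.shp"],
    check=True,
)

with open("roads.geojson") as f:
    roads = json.load(f)

# Split the feature collection by county so each GeoJSON file stays small.
by_county = defaultdict(list)
for feature in roads["features"]:
    by_county[feature["properties"].get("COUNTY", "unknown")].append(feature)

for county, features in by_county.items():
    with open(f"roads_{county.lower()}.geojson", "w") as out:
        json.dump({"type": "FeatureCollection", "features": features}, out)
```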
But we can, I built some basic tile layers to, in tile mail to help people kind of see what was out there. So this is like, you know, really kind of messy, but when you zoom in it looks a little nicer. But it's just showing all the different classifications we have. We have like, you know, city boundaries just to people, so people know where they're working from. We have urbanized area that's not very helpful, but people requested it. And so when you log in, as I'm going to do right now, you're basically assigned to a county or collection of counties if the jurisdiction you're in spans that. So these are Fulton and CAB counties. And that's the data I have access to. And in reality, if I go to the GitHub page and I want to, you know, create a pull request for any of the data that's on that tool or on this repository, I can do that. But our users are not about to open up a command line, interface, and type, get commit or get add and all that stuff. So this is really their way to interact with that data. And so you can click on road segments. And this is just showing kind of general leaflet functionality. And when you want to suggest a change for that road segment, this form pops up where, you know, you give the name of road because that's not really an attribute that the DOT maintains. You, you know, say you want the entire segment that's highlighted or just two and from intersections. You can see the existing functional class. You can see if it has volumes, so like traffic counts for the road, you can see that. And then you can kind of propose a new functional class. And so we also have kind of a list of all the proposed change that has been created by using that same form. And so you can see that this is just kind of like a data table that's searchable, tried to make it as user-friendly as possible. But these are basically just, if you're familiar with GitHub, they're issues on GitHub that we're just using the API to pull all that information, kind of format it in a way that's familiar to people. And when you view it, it loads that road segment, kind of gives all the description and justification that the user submitted. And so it's really just kind of a wrapper around GitHub focused on this particular application. So I'm going to hop back to the presentation here. Here we go. So that's the app. You might have seen when I pulled up one of those issue boxes, this, you know, change status, all these little buttons right here. And that was just really taking advantage of the milestones feature on GitHub. You can set milestones for different, you know, issues and if they're like bug fixes or feature requests or, you know, this issue is going to be in version 1.0, we just re-adapted that for are these road requests, are they approved, are they advancing or in review? So we're kind of adapting the GitHub interface for our own purposes. And then we also built this another tool that allows you to basically compare the existing conditions, which is the map on the left, with all of the different categories of issues. And so right here I think that's, I can't really read, I think that's advancing. So it shows all the road segments that are advancing. And you can do the same kinds of tile overlays to kind of get a picture of what the whole network looks like region wide. So we also wanted to pull data down because inevitably there's going to be some work on the back end to, you know, figure out which requests get approved, which get, you know, kind of push down the line. 
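The issue list and the CSV export described above come down to a couple of GitHub API calls. Here is a minimal sketch using the requests library; the repository name is assumed from the project URL in the abstract, and the token is a personal access token with access to that repo.

```python
import csv
import requests

REPO = "atlregional/fc-review"   # assumed from the project URL; adjust as needed

def fetch_change_requests(token):
    """Page through every issue (each one is a functional-class change request)."""
    issues, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{REPO}/issues",
            headers={"Authorization": f"token {token}"},
            params={"state": "all", "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return issues
        issues.extend(batch)
        page += 1

def export_csv(issues, path="change_requests.csv"):
    """Flatten the issues (including their milestone, i.e. review status) to CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["number", "title", "state", "milestone"])
        for issue in issues:
            milestone = (issue.get("milestone") or {}).get("title", "")
            writer.writerow([issue["number"], issue["title"], issue["state"], milestone])
```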
So we have an export button that just exports all that issues into a CSV. And also if you wanted to actually work with it in like ArcMap, which our agency has, because it's not in a GeoJSON format, you kind of have to, you know, use desktop GitHub or Git pull and pull that down and then re-project it into or change the format into a shape file. And so this is really the project timeline. We're at the tail end of it. So on the left were the y-axis we have the number of change requests and on the x-axis we have date. So you can see the first phase of the project was really training. And that was pretty much for the month of May. So we had a few requests put in in that early stage. And then the bulk of the requests actually came at the very end of the submission period. Surprise, right? Like you send a reminder email and a flood of issues get pulled that very same day. So, you know, at the max we had 257 in a single day. That was like the Thursday before the Friday deadline. And then you see a gap in the weekend, right? And then the review period is kind of what Sage is wearing right now. So it's kind of, you know, reviewing the whole network, all the changes that people have made and figuring out what makes sense, you know, what doesn't. Like is this like local residential road actually need to be upgraded to an interstate? No. It's actually a local road with speed bumps. It's not an interstate. So we're in that review process right now. So, yeah, deadline on Friday. And so the total change request we had was about a thousand, which if we had managed that through a stack of paper forms and trying to convert that all back into spatial data, that would have been really challenging. And this is just a summary of change requests. It's kind of hard to look at. But basically, it shows, you know, the approximate number of changes requested. And we actually, I kind of put this here because we had 20 submissions that were indicating no proposed functional classification. So they just didn't fill out the form all the way. And really that ended up being just a form validation. We weren't doing checking, like, making that a mandatory field. And so we had some errors as a result of that. And the other thing to point out here is we had a huge number of local roads that were proposed to be upgraded. And, you know, from a data standpoint, that's fine. We were like, okay, that's, we'll just manage that process. There's a lot of changes, but that's okay. But actually, we found out later from the DOT that that actually means that there's going to be a lot of additional data collection on their part because they have to go out and, like, you know, figure out what the roadway design is like and do traffic counts. And so it comes with a lot of funding questions and a lot of challenges, too. So the kind of overall challenges, one of the big ones was source data. We had used this data set from the DOT, but it was unfamiliar to us. Some of the segment divisions were really strange. So it was a lot of figuring out what we're actually working with. And I mentioned earlier that we had to pull a lot of roads out of the network in order for GitHub to serve it properly. That's because about 80 percent, somewhere around that, maybe a little less of the entire network are local roads. And so we actually had to pull all that out. And then we ended up just having kind of the next classification up all the way up to Interstate because we didn't want to deal with the question of all those local roads. 
And so we would have people kind of send us into, like, batch requests to add in local roads, which ended up being a lot of manual work. Users, a lot of these users, they're all government staff and a lot of their agencies are still running on, like, you know, pretty old Windows XP Internet Explorer. And so, you know, trying to get people to download the latest browsers to make this application work was really challenging. Sometimes we did have to show the bones of GitHub, especially when people wanted to make comments. And so that would occasionally confuse people. And also the procrastination issue, I mentioned the deadline of all the change requests coming in. That was, that posed some personnel issues, like trying to manage that whole process and instructing people how to open the application. And, you know, really the volume of change requests continues to be a challenge because we have these additional data collection efforts that are acquired by the DOT and also just kind of figuring out what makes sense from a network perspective, like going through all those, even in a GIS format is challenging. So moving forward, we're continuing to process these change requests. And really, really the goal is to make a cohesive network. Something that makes sense and reflects actual travel patterns. And we're considering the framework for other data sets. For instance, the Transportation Improvement Program, which kind of manages a list of projects that are in the hopper for the, like, five-year period. And, you know, we'd love to hear ideas from others. I think hearing what Aaron is working on is amazing and hopefully others are kind of experimenting with this stuff. But I think that's kind of the crucial theme, experimenting. And, you know, it worked for our process. It might not work for other processes, but we're always curious to hear from others. And this is at the Oregon border. Awesome state to be in. I'm really happy to be here, but happy to take any questions right now. There. Great talk. I was just curious. I'm a little fuzzy on how you're getting the information from GitHub back to the GIS. Like, are all those attributes coming over automatically or are the edits being made in the GIS? And this is just tracking the... Yeah. So basically, we used GitHub's API to read the data from GitHub pages and then to write back to it. So in the GitHub pages branch, we were writing data to that file. And then to get that back into like a GIS environment, we would just pull that, all that data down and convert it into a shape file and then you can kind of work within an ARC map. Yeah. Yeah. We're just kind of like adding attributes to the GeoJSON, all those polylines. Actually, hopefully it helped answer that question. So we actually built... So I worked for Esri. We actually built an open source library called Coup, K-O-O-P. Have you seen it? It's... I have. It hopefully solves this problem. The idea was it's no JS. You point it at like a GitHub repository and it can serve it out as feature services or shape file. So someone can go and live an ARC map and pull data from GitHub or vice versa. If someone has a server, they want to use GeoJSON. You can deploy that as well. So that might help solve that. Keep the data in GitHub, but I want to pull it out into a GIS or other formats and I have to download it and convert it by hand and things like that. Yeah. 
That was one of the problems we had because everybody has ARC map, but kind of getting GitHub installed on their machines and QGIS, there's a lot of overhead with that, like going through IT and all that. So that's good to know. In your timeline, it looked like the whole review process was just a couple months. How was... How long was the whole process of getting the infrastructure set up for this? So look back at my commit history, but it kind of got started. Like we were kind of informed by GDOT that we needed to get started around, I think, like February around then. And then so I was like, hey, I think I have a solution for this. And everybody was like, well, let's see it. So it was kind of a rapid development cycle. And then so it kind of, I think, started around like end of February and wrapped up in May. So, yeah. Have you gotten interest in this type of project from other city governments or other government organizations? So GDOT was... They were actually really interested in the process. They're not really at liberty to kind of impose any process on other regions in the state. But I think in the future, they had interest in possibly using this or some iteration of it. We haven't really reached out to other MPOs around the country, but I think that's kind of the next step. And it's not really clear what the best form is for that. I am speaking at the Association of MPOs in like October. So I think that's really going to be our chance to share this more broadly with that community. But, yeah. So you mentioned used Mapbox for the map that we saw and the data that we saw inside of the GitHub application. Was that your choice or does GitHub integrate directly with Mapbox in some way? That was our choice because we really wanted the kind of tile overlays in there. And the easiest way to get that up was, for us, was through Tile Mill and then to load those up to Mapbox. So there's not any like direct and because like the code on GitHub pages is just, you know, HTML, JavaScript thrown on there. And so it was just easiest for us to get up and running. I would say thank you for this time. Thanks.
|
The Atlanta Regional Commission (ARC) continuously solicits feedback on transportation data from local government partners. Historically, this process has taken the form of lots of markings on plotted maps with immeasurable amounts of manual work on the tail end to organize and interpret this feedback. Many tools developed specifically for this process today often fall short of the needs of agencies (such as geospatial presentation and tracking comments), yet the cost to develop or implement custom software is generally out of reach for government agencies.This presentation introduces a case study of the process to develop geospatial collaboration tools for managing transportation data directly hosted on GitHub pages (currently in development at http://atlregional.github.io/plan-it/ and http://atlregional.github.io/fc-review/). This approach was partially inspired by GitHub's recent features additions that make collaborating on geospatial data simple and elegant. Because these data span both functional and jurisdictional divisions, many of the greatest challenges have been project management related --- coordinating stakeholder feedback and project requirements. However, by utilizing the existing git/GitHub infrastructure, many of these requirements can be managed cost effectively. Moreover, the framework allows for direct integration with other application environments via the GitHub API and GDAL Tools, ensuring that local modifications to project data are committed back to the data repository.
|
10.5446/31648 (DOI)
|
My name is Anthony Fox. I'm from Charlottesville, Virginia. I work for a company called CCRI, Commonwealth Computer Research, and I'm here to talk to you about distributed spatiotemporal analytics specifically built on top of the Hadoop ecosystem and Acumulo distributed column family database. GeoMesa is a location tech open source project, part of a family of excellent projects that include GeoTrelis and Spatial4j and JTS and UDIG and a handful of other spatial capabilities under the Eclipse Foundation. So I didn't want to make any assumptions about what people know about the Hadoop ecosystem. So I'm going to cover just enough of it that the rest of the talk is not opaque to everybody. Just to show hands how many people have heard of Hadoop, how many people have written a map reduce job. Okay, so there's quite a bit of experience here, but I'm going to cover just enough for those of you who haven't. After covering the Hadoop ecosystem, I'm going to talk a bit about how Acumulo, I'm sorry, how GeoMesa fits into that infrastructure and how we enable spatiotemporal indexing on top of this distributed database. But then the primary part of this talk is a dissection of three example analytics and how they execute across the different components in a traditional Hadoop stack. So I'm going to try to get to that as quickly as I can without being too confusing. So the first thing we're going to cover is the Hadoop ecosystem. We're going to cover HDFS and MapReduce. Then we're going to talk about Acumulo, which is the database that's built on top of HDFS. And then talk briefly about some of the libraries that extend and simplified the process of developing for Hadoop. The stack that I'm going to cover and that's going to be exemplified in all of the analytics is essentially what you see on the slide here. So at its core, we have HDFS, the distributed file system. Acumulo is built on top of HDFS, as is other databases that are similar to Acumulo like HBase and Cassandra. GeoMesa is built on top of Acumulo and it also has plugins inside of GeoServer, which is shown on the right. And then the computational libraries that are on top of the Hadoop stack, there's you have your traditional MapReduce, that's the batch analytic processing out of the Google paper from years ago. You've got a fairly new capability called Spark that's gaining a lot of momentum. It's very good for doing low latency type computations in a distributed fashion. And then you've got Storm, which is the streaming analytic platform that's on top of Acumulo. It takes tuples in, in a stream, executes a computation and stores that wherever you want it to be stored. Kafka's shown on the left, it enables things like Storm within this environment. It's a high performance queuing system, message queuing system. But what we've done with GeoMesa is we have built plugins for GeoServer so that any access, any OGC access can come in through GeoServer and then be transparently executed on this stack. So any OGC client can make a WMS request which then distributes across the resources that are backed by this stack or a WFS query or a WPS request might come in and execute in one of MapReduce, Spark or Storm. And I'm going to talk through some examples of that. So first, HDFS and MapReduce. HDFS is a block file system. It takes very large files on the order of terabytes, breaks them into blocks of by default 128 megs but it's obviously configurable and then distributes those blocks across all of your resources. 
So it also replicates those blocks for redundancy and failover. The blocks establish data parallelism. So Hadoop is very good at data parallel computations and MapReduce is very data parallel. You send a map task, one map task per block, it executes on that block, and if you have 100 blocks in your file, you're going to get 100 parallel tasks that execute. MapReduce is also predicated on associative computation. So in the MapReduce paradigm, you've got map tasks which go over your raw data and aggregate it in some form and emit results. And then you have a shuffle step which organizes your data, sorts it and then sends it through a reduce step which does the aggregation. The canonical example, the hello world of MapReduce, is word count. So probably everybody that has written a MapReduce job has at least seen this or even written it. But the idea is that you have a huge text file and it's broken up into these 128 megabyte blocks. Each map task processes one of the blocks and emits all of the words and the number one associated with each word. And then each reduce task, each execution of the reducer, happens against a single word and it aggregates the sum of all of those words that came out of the map tasks. The shuffle sorting phase happens in between in order to get to that single reduce step for a single word. And the summation is the associative operation there. It's the reduction. But more importantly for this audience is how neatly a heat map computation maps to the same paradigm. So let's say you have a huge text file of WKT geometries and it's broken up into these blocks. You send out a map task per block, you send your computation to the data. For each feature in the block, each line is a WKT geometry. You compute the world-to-screen pixels that that geometry impacts and you emit the pixel and the number one. Then in the reduce step, after everything's shuffled and sorted by pixel, you sum up all of the elements that hit a pixel and you have a heat map. It's very simple and it's very effective in a batch computation. I'm going to show how we've implemented that in Accumulo for low latency MapReduce. So MapReduce is a computational paradigm. It's not tied to the MapReduce infrastructure that Hadoop has. You can do it in other contexts as well, including Spark. So how does that look? Okay. So Accumulo is a distributed database built on top of Hadoop. It was based on Google's BigTable paper which came out in 2007. That paper spawned a number of implementations including HBase, which I think is the most widely used column family database on the Hadoop platform. But Cassandra is another instantiation of the same concept. You get column oriented storage. It's a key value store. I'm going to go into more details about this in a second. But column oriented storage gives you nice compression and also gives you an arbitrary number of columns. Each row can have a different set of columns. You have a more schema-less structure, although it's not no schema. One of the primary constraints of these distributed databases is that they impose a single type of indexing capability, and that's lexicographic indexes. Your data has to be sorted and the database will sort your data for you. And if you think back to the blocks that I just mentioned, it naturally conforms to that block level partitioning. So a block corresponds to a lexicographic subset of your key space.
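Backing up to the heat-map MapReduce just described, here is a toy single-process version of it. In a real job there would be one map task per 128 megabyte block and the shuffle would group the (pixel, 1) pairs before the reducer sums them; the WKT parser below only handles points, and the world-to-screen transform is a stand-in.

```python
from collections import Counter

def parse_point(wkt_line):
    # Toy parser handling only "POINT (lon lat)" lines.
    nums = wkt_line.strip().removeprefix("POINT").strip(" ()").split()
    return float(nums[0]), float(nums[1])

def world_to_screen(lon, lat, width=256, height=256):
    # Stand-in for a real world-to-screen transform.
    return int((90.0 - lat) / 180.0 * height), int((lon + 180.0) / 360.0 * width)

def map_task(block_lines):
    # One of these would run per HDFS block; it emits (pixel, 1) pairs.
    for line in block_lines:
        lon, lat = parse_point(line)
        yield world_to_screen(lon, lat), 1

def reduce_task(pairs):
    # The associative step: sum the ones per pixel.
    counts = Counter()
    for pixel, one in pairs:
        counts[pixel] += one
    return counts

blocks = [["POINT (-122.42 37.77)", "POINT (-122.41 37.78)"], ["POINT (-122.42 37.77)"]]
heatmap = reduce_task(pair for block in blocks for pair in map_task(block))
```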
And that's one of the primary challenges with indexing spatial data in an Accumulo column family database. I'm going to go into more detail about that. We leverage a couple of nice things about Accumulo. Bloom filters help us to filter files that we don't need to access to satisfy a query or a predicate. And server-side iterators are something that I think actually distinguishes Accumulo from the other databases like HBase. It's a natural extension point. You can drop jar files in that implement iterators and they are stuck into the iterator stack that's executing and traversing over your data. We use these to do spatiotemporal type predicates. So we do all of the DE-9IM topological predicates within iterators inside of the database. And we can also do analytics with these iterators. So I'm going to show you that. More detail. I mentioned it's a key value store. The key, though, is actually broken up into a five-tuple consisting of a row ID, a column family, a column qualifier, a visibility, which is the security marking within Accumulo, Accumulo is very sensitive to security, and a timestamp to do versioning. You can have multiple versions of a piece of data. And the value is an uninterpreted blob of bytes. Data is distributed. It's a distributed database, obviously. So tables are broken into tablets. And tablets are sent out across all of the tablet servers. The tablet servers are processes that are running on all of the nodes in your cluster. And they are responsible for a single tablet or perhaps multiple tablets of a single table. So they know that they're specifically responsible for a subset of the entire key space. And if a query happens to hit that subset, then that tablet server will be queried, will be communicated with to satisfy that query. Accumulo has fantastic write performance. If you pre-split your table, you're essentially saying to each tablet server that you're responsible for this small subset. And as your data comes in in a streaming fashion, you assign it to a particular tablet server based on its key. And so you're getting this dispersed write capability. You're spinning multiple disks as quickly as you can. That's how you get really good write performance with Accumulo. Like these other column family databases, HBase and Cassandra, Accumulo is quite low level. You have to lay your data out according to the way that your access patterns dictate. So in an RDBMS, in a Postgres, you lay out your data in a tabular form and it's very careful to structure the data on disk for performant access. In Accumulo, you have to map your data to that lexicographic index, which has implications for data layout. So I'm going to talk about that with GeoMesa. There are many libraries that you can use to simplify the process of development on top of Hadoop. On the left are a few libraries that are great for doing batch analytics. Some of these libraries, like Cascading and Scalding and Pig, take your computation and in a sense compile it to a set of staged MapReduce jobs. Spark does something very similar but executes in its own execution environment for low latency purposes. On the right, you've got streaming analytics. The canonical example is Storm, which is a directed acyclic graph representation of your computation. And then there's Spark as well, which has a sort of new capability now called Spark Streaming, which I haven't had enough time to work with but it's pretty shiny and I'm looking forward to working with it.
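A minimal sketch of that key/value model, just to make the sort order concrete. The field names follow the five-tuple described above and the row IDs are placeholders; sorted iteration approximates how a tablet server scans a contiguous range of keys.

```python
from collections import namedtuple

# The five parts of an Accumulo key; the value is an uninterpreted blob of bytes.
Key = namedtuple("Key", ["row", "column_family", "column_qualifier", "visibility", "timestamp"])

entries = {
    Key("row_0002", "feature", "geom", "public", 1409872300): b"<serialized geometry>",
    Key("row_0001", "feature", "geom", "public", 1409872000): b"<serialized geometry>",
}

# Entries are scanned in lexicographic key order, so anything you want to read
# together has to sort together; that is why GeoMesa encodes space and time
# into the row ID itself.
for key in sorted(entries):
    print(key.row, entries[key])
```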
So let's talk a bit about GeoMesa and how we actually store spatiotemporal data and query it. There's three aspects to that that are critical for understanding it. One is we have to deal with this lexicographic index and to do that we use what's called space filling curves. That implies a physical data layout that has implications for performance. Finally we have to address query planning in this structure so that we can actually respond to queries with arbitrary polygons. So the problem is that we have multi-dimensional data. The dimensions of primary interest to this audience is lat-long and time but often we have dozens of other attributes as well. For lat-long and time, we have to map that into a single-dimension linear lexicographic index. To do that, we linearize the key space using GeoHashes. GeoHashes, I'm going to describe in the next set of slides, they're a form of space filling curve but they have some very nice properties that are recursive prefix trees. You get nice compression because often they share a prefix within a cumulo and there's tunable precision. You can add or remove bits to your representation of a geometry so that you can scale up to worldwide data sets or scale down to regional data sets and still use all of your clusters resources. So the way that, so GeoHashes are a Z curve. There's Hilbert curves, there's a handful of others space filling curves but basically they map a multi-dimensional space into a single dimension. So remember we've just got this lexicographic dimension which we can work with. A GeoHash is an interleaved bits, the interleaved bits of splitting along lat and long dimensions. So the first split is on longitude. On the left you get a zero, on the right you get a one. At the next level you split on latitude and you append a zero. If you're above the line, you append a one. If you're below the line and you keep going until you get down to the level of resolution that you care about. And the level of resolution that you care about is a function of the data boundaries of your problem domain. So if it's the world you might go to 25 bits of resolution but if it's a region, if it's the mid-Atlantic you might go to 35 or 40 bits of resolution. This interleaving of bits induces a linear walk through the space and the linear walk is lexicographic. So if you take that binary string we can base 32, it's actually not base 32 but it's similar to a base 32 encoding of that binary string and the lexicographic properties are such that it traverses the space as you see here. You can go to any level of resolution as well which is a nice property of geo-hashes. So how does this translate to laying out data within a cumulo in GeoMesa? As an example we're going to look at events in downtown San Francisco. So the first thing that we've done is we've gridded our space down to 25 bits of resolution which corresponds to five characters in a base 32 encoding. So we're going to look at one of these course resolution blocks of data. So the first thing that we have to do is we see our tablet surface on the right. The tablet surface is distributed across the CPUs that are distributed across the disks. We want to spin those disks in a optimal manner. So first thing we have to do is allocate a slice of space in a structured manner on those disks. Now what you see is that within that block that we care about, NQ8YY downtown San Francisco, we've allocated a slice on all of our tablet servers. 
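Here is a plain-Python geohash encoder doing exactly that interleaving, using the standard 32-character alphabet. Encoding downtown San Francisco (37.77, -122.42) at five characters, 25 bits, gives '9q8yy', the cell used in the example that follows; this is a sketch of the general technique rather than GeoMesa's exact encoding.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"   # the standard geohash alphabet

def geohash(lat, lon, precision=5):
    """Interleave longitude and latitude bits and encode 5 bits per character."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, use_lon = [], True                   # the first split is on longitude
    while len(bits) < precision * 5:
        rng, val = (lon_range, lon) if use_lon else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid                       # keep the right/upper half
        else:
            bits.append(0)
            rng[1] = mid                       # keep the left/lower half
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):
        value = 0
        for bit in bits[i:i + 5]:
            value = (value << 1) | bit
        chars.append(BASE32[value])
    return "".join(chars)

print(geohash(37.77, -122.42))                 # -> 9q8yy, downtown San Francisco
```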
So we're actually going to uniformly distribute the data to all of those tablet servers. And we do that by prefixing the data with a shard ID modulo, the number of tablet servers that you would like to get, the level of parallelism that you want to get. That's represented by coloring the dots in the map. So all of the green dots go to tablet server one, all of the yellow dots go to tablet server two and the red dots to tablet server three. And that happens within that level of resolution. But it's repeated for every one of these grid cells in a structured way. So we know if a query comes in that we have to hit these three tablet servers, but when we hit these three tablet servers, we can quickly jump to the slice that corresponds to NQ8YY. So in our case, we're spinning three disks, but we're spinning them in a structured way so that we quickly traverse our data. So how do we do query planning in this context? What I'm showing you here is a CQL, an OGC CQL query with three predicates. It's got a spatial predicate, the B-box query at the top, a temporal between predicate, a during sort of predicate, and an attribute predicate. And the idea of query planning in general is to minimize false positive disk reads. We don't want to traverse data that we don't have to and maximize true positive disk throughput. So we want to spin those disks, or as many disks as we can, to get the data off of disk and come back from the satisfaction of the predicate query. Since we have three attributes in our predicates, we've got space, time, and a attribute called tweet text, GMAs actually has secondary indexes on any of the attributes that you have in your data. So we have to choose the primary index that we care about that would reduce the cardinality of the results set the most. So if you're doing a Postgres explain on your query, you'll often see it chooses a particular index and then does a sequential scan across the results of that index and applies the predicate, the predicates that it didn't use as an index. Or it does two indexes and it does a bitmap intersection of the results of those sets. In our case, we're going to talk about the spatiotemporal aspect. So assume for now that in this query, we decide that the spatiotemporal predicates combined reduce the results set the most. So we're going to ignore the attribute query. We're going to actually apply that in parallel across our data. So the first thing that we have to do is we have to take our polygon and decompose it into the set of covering geo-hashes that correspond to the ranges that we have to scan in our accumulo database. We recursively iterate over the polygon using a priority queue where the priority is based on the distance from the center of the geo-hash that we're looking at to the center of our predicate polygon. And in this manner, we can optimally discover the covering geo-hashes and ignore any of the other geo-hashes that we don't have to traverse down into. So we get a set of geo-hashes at different resolutions, right? So you can see kind of in the center there that there's a fairly large geo-hash. That's at a lower resolution than some of the ones around the edges that have to cover the complexity of the border of the polygon. That corresponds to scan. So then we send the scan out to all of the different tablet servers and we send with the scanner the attribute filter. 
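A sketch of that row-key layout, reusing the geohash encoder above. The shard prefix spreads a single geohash cell across tablet servers, so a query for one cell turns into one small range scan per shard; the real GeoMesa key also encodes time and more, so treat this purely as an illustration.

```python
import hashlib

NUM_SHARDS = 3   # roughly, the level of parallelism you want per cell

def row_key(feature_id, lat, lon):
    # Stable shard assignment, then the geohash cell, then the feature ID.
    shard = int(hashlib.md5(feature_id.encode()).hexdigest(), 16) % NUM_SHARDS
    return f"{shard:02d}_{geohash(lat, lon, precision=5)}_{feature_id}"

# A query for the downtown San Francisco cell becomes NUM_SHARDS range scans,
# one per shard slice: ["00_9q8yy", "00_9q8yz"), ["01_9q8yy", "01_9q8yz"), ...
scan_ranges = [(f"{s:02d}_9q8yy", f"{s:02d}_9q8yz") for s in range(NUM_SHARDS)]
```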
So we say, we know now that you only need to scan these small ranges of data and you have to apply this attribute filter, which is tweet text LIKE 'FOSS4G', in parallel on the server side. So that's the sequential scan part, but it's against such a reduced subset of the data that it's much faster. And that's the general idea behind GeoMesa's spatiotemporal querying. So that's the background material. Now we're going to decompose and dissect three analytics and how they execute across all of those components of the Hadoop stack that I put up as an image before. The three analytics are density computations, streaming analytics for things like anomaly detection or tracking, and spatiotemporal event prediction. So starting with density computations, this is the minimal stack, a minimal use of components within the stack. We want to take dots on a map that have some information but not that much information and turn it into a heat map that has much more information. We already talked about how you might do that in a MapReduce fashion. But what we can do is do that all within Accumulo. So I said that we have uniformly spread the data across all of our tablets, but the data represents that single cell in all of the tablets. So what we do is we send out an iterator, which we've stacked on top of this stack of iterators, the extension point of Accumulo that is traversing our data, and it's initializing a sparse matrix. The sparse matrix for each tablet server covers the whole cell but it doesn't cover all the data in the cell. So the map task is initializing the sparse matrix. It's sent back as a sparse matrix, which is a compressed representation of all this data, and then on the client side all of those matrices are summed together. That's our associative operation and the result is a heat map. So a request comes in, in this case it's via WMS, and it requests a heat map via a styling parameter in the SLD portion of the request. GeoServer is acting as the client of Accumulo in this case. It sends out requests to Accumulo, to each tablet server that has a slice of the data that we care about. Each tablet server executes and computes a sparse matrix of the data that it knows about for that cell, sends it back to the client, which aggregates it into a single representation and sends it out over the OGC request. So that's pretty simple. The second analytic that I wanted to talk about utilizes Storm for streaming analytics. Some of the use cases are epidemiology, how diseases propagate around the world, which is particularly apropos with the Ebola stuff that's happening now. Geofencing, you might want to put a virtual polygon around an area and see when things enter or leave the area of interest. Tracking problems, already mentioned. One of the interesting things recently has been event detection in streams of data. There's a company called Jawbone that makes a Fitbit-like health monitor and they had this really interesting analysis of the sleep patterns of their users after the Napa earthquake. They could see how the sleep pattern was disrupted as you went out from the epicenter. That's what's shown up on the top right there in that line chart and the URL is listed there as well. It's pretty interesting to go to.
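Backing up to the density computation just described, here is a toy version of it: each "tablet server" builds a sparse partial result over its slice of the cell and the client sums the partials. Addition being associative is what makes the order of the responses irrelevant; the sample coordinates and cell size are made up.

```python
from collections import Counter

# Each inner list stands in for the slice of the cell's data held by one tablet server.
tablet_chunks = [
    [(-122.42, 37.77), (-122.42, 37.77), (-122.41, 37.78)],
    [(-122.42, 37.77), (-122.40, 37.79)],
]

def tablet_iterator(features, cell_size=0.001):
    """What the server-side iterator computes: a sparse matrix (grid cell -> count)."""
    partial = Counter()
    for lon, lat in features:
        partial[(round(lon / cell_size), round(lat / cell_size))] += 1
    return partial

def client_merge(partials):
    """What the client (GeoServer in the talk) does: sum the sparse matrices."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

density = client_merge(tablet_iterator(chunk) for chunk in tablet_chunks)
```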
So you could take in a Twitter stream, you can monitor it, you can infer the sleep patterns or the disruptions of sleep patterns, you can cluster mentions for impact analysis and for potential rapid epicenter analysis for emergency resource allocation, those types of applications. The architecture stack looks like this. You have GeoServer sitting in front of a Kafka queue. You have a fire hose of data, Twitter's fire hose of data as an external source being published to Kafka topics. The storm topology that's running and represented by those bolts in between Accumulo and Kafka has a spout that's listening to the particular topics. A spout is Storm's vocabulary for something that pulls data into the computational topology. So it reads off the spout, it reads the messages and it sends it to this computation which might do filtering for messages about earthquakes and then do some sort of clustering like DB scan and one of the far right topologies. It writes to Accumulo, it also potentially reads from Accumulo to get static contextual information to improve the analytic and the result is written both from Accumulo and from Storm out through a topic that GeoServer's listening to. So what we've implemented within GeoServer is a data store that retrieves its data from Kafka and it only retrieves the last 30 minutes of data. So every time you hit a WMS or a WFS against that, you're going to get that 30 minute cache. So it's a real-time data source. So there's lots of applications of streaming analytics in this context and we use this sort of infrastructure and this set of components to do that. The last analytic that I want to talk about is spatiotemporal event prediction. The applications are real estate buying and selling patterns, again epidemiology, but we're going to work through a criminal incident prediction example. And this one's going to use traditional map reduce as its computational backbone. So the idea is that we're going to model a criminal's preferences for where they intend to commit a crime. And we're going to use spatial features as a proxy for the choice factors that they use when making these decisions. The underlying principle is that we're looking at crimes, economic crimes, not necessarily crimes of passion. Economic crimes have a rational foundation to them so we can model them and we can predict them. The example that I have up here explicitly is breaking and entering. So if you think about breaking and entering, there's some choices that you're going to make about where you might commit a breaking and entering crime. You're going to use factors like the nearest police station, the neighborhood demographics, and the distance to the nearest highway on ramp, lighting in the neighborhood of interest, and a whole host of other factors, all which are represented as vector features. So in order to do this analysis, we need to take those vector features. We need to take the locations of historical events, historical breaking and entering events, and determine which of those vector features have an impact, have a predictive impact on the activities. So the first thing that happens is this comes in as a WPS request. The inputs to the model are a list of features to consider and the historical events. And the first thing is, so this breaks down into a two-stage map produced job. The first thing I have to do is vector to raster transformation of those features. We have to say, given any site on my map, what is its relationship to a police station? 
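As a concrete sketch of that vector-to-raster step, here is a tiny single-process distance surface over a small grid. The station coordinates, extent and cell size are made up, and a real run would work in a projected coordinate system and split the grid across map tasks.

```python
import math

# Hypothetical police station locations; each map task would handle one factor
# (stations, highway on-ramps, demographics, ...) over its own block of the map.
stations = [(-122.41, 37.78), (-122.45, 37.76)]

def distance_surface(minx, miny, maxx, maxy, cell=0.01):
    """Distance from every grid cell to the nearest station, row by row."""
    rows, y = [], miny
    while y < maxy:
        row, x = [], minx
        while x < maxx:
            row.append(min(math.hypot(x - sx, y - sy) for sx, sy in stations))
            x += cell
        rows.append(row)
        y += cell
    return rows

raster = distance_surface(-122.52, 37.70, -122.35, 37.82)
```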
And we compute a distance to that police station. So we parallelize over the different features. We have 50, 60, 100 different features. We send those out to map tasks within Hadoop. So the top task tracker might be working on the police station's problem. The middle one is working on the demographics problem and so forth. Each one of those requests the data out of GeoMesa and brings it back and computes a raster representation of that data. Those raster fields are then sent back to GeoServer, which is acting as the client again. GeoServer takes that and fuses it with the locations of historical events. And it does that using a statistical model. And then it needs, so now it's estimated a model. It knows the weights of different factors. And it needs to predict across the entire geospatial context where the most likely place is that an attack is going to occur, a criminal act is going to occur. So in order to do that, we have to apply this model which may have a 80-dimensional, you know, matrix representation to every discretized cell in our map. And that's an expensive operation. So we can parallelize that again by blocking out different portions of the map and sending each block to a different map task for execution, which is shown here. Each one is applying the model to a different region of space. And the aggregation, the reduction, is done in GeoServer. And the result is a threat surface, a breaking and entering threat surface in downtown San Francisco. So that concludes the talk that I was giving today. I've listed a bunch of references here. Hopefully this is available online. So if anybody's interested, you can go to any of these websites. And I'd take, happy to take any questions from anybody. Thank you.
|
The rapid growth of traditional and social media, sensors, and other key web technologies has led to an equally rapid increase in the collection of spatio-temporal data. Horizontally scalable solutions provide a technically feasible and affordable solution to this problem, allowing organizations to incrementally scale their hardware in tandem with data increases.GeoMesa is an open-source distributed, spatio-temporal database built on the Accumulo column-family store. Leveraging a novel spatio-temporal indexing scheme, GeoMesa enables efficient (E)CQL queries by parallelizing execution across a distributed cloud of compute and storage resources, while adhering to Accumulo's fine-grained security policies. GeoMesa integrates with Geotools to expose the distributed capabilities in a familiar API. Geoserver plugins also enable integration via OGC standard services to a much wider range of technologies and languages, such as Leaflet, Python, UDig, and QuantumGIS. In this presentation, Anthony Fox will discuss the design of spatio-temporal indexes in distributed "NoSQL" databases, the performance characteristics and tradeoffs of the GeoMesa index, and how it can be leveraged to scale compute-intensive spatial operations across very large data sources. This discussion will detail how GeoMesa distributes data uniformly across the cloud nodes to ensure maximum parallelization of queries, and other computations. Specific computationally intensive analytics include distributed heat map generation over time, nearest neighbor queries, and spatio-temporal event prediction. He will present common analytic workflows against spatial data expressed as batch map-reduce jobs, dynamic ECQL queries, and real-time Storm topologies. Using the Global Database of Events, Language, and Tone (GDELT) dataset as a working example source, Mr. Fox will demonstrate how a completely open-source architecture stack, including GeoMesa, enables ad-hoc and real-time analytics.This presentation will be of interest to data scientists, geospatial systems developers, DevOps engineers, and users of massive Spatio-Temporal datasets.
|
10.5446/31651 (DOI)
|
Cool. Good. How many people, is this your first FOSS4G? Wow, that's awesome. That's cool. Welcome. It's been fun. I've been to all the FOSS4Gs, all the way back to the MapServer user conferences. I'm not that old. Who here has been to all the other FOSS4Gs? Anybody? Of course you are. Oh, you should just give this talk. I think you've seen this before. Cool. I'm going to go ahead and start. I think I have 30 seconds, so I'll use it wisely. So, my name is Andrew Turner. I'm currently the CTO of Esri R&D DC. I like just having a fully acronym title. I'm talking about some of the stuff we're doing there, particularly a lot of big data stuff we've open sourced, as well as other projects that we've done that kind of relate to this. And you probably ask yourself, why is Esri here? And hopefully by the end of this, you'll actually understand it. It'll make a little more sense. But first, I want to talk a little bit about what's interesting, what's going on now in general in the big data world. There's this book by Anthony Townsend about smart cities and there's a really good story in it. It's talking about the growth of cities. I live in DC now. It's been fascinating living in an urban environment seeing this happen. Toward the end of the 19th century there was this vast growth in how fast people were moving into cities. Now it's crossed over 50%. But prior to the 1840s and 1860s, at least in the US, there were fewer than 2 million people living in cities. Only 10%, I think, of the world or less were living in cities. But by 1920, there were over 50 million people living in cities. So it was a huge, huge boom of people moving into these cities, into these urban environments, and living in densely populated areas. People were immigrating and moving around a lot. It's hard to track where everyone was. So in the US, we have by constitutional mandate a decadal census. Every 10 years, we have to go and count all the people, or at least as best we can measure. With the 1880 census, so many people were moving here into the US and into these cities that it took seven years to calculate all the results. So it took seven years, when they only had three years left before they had to start the next one, to start calculating it again. And they estimated that in the 1890 census, there were going to be too many people and it would take too long, that they wouldn't be able to publish the results from the 1890 census until after the next census had taken place. So essentially, it was an unbounded problem. They didn't know how to solve this. People were building cities faster than we could ever count them and people were moving into them too fast. So a young enterprising census clerk, Herman Hollerith, saw this as an awesome opportunity and took the idea from looms, which had these cool things called punch cards that you used to program up how to actually weave rugs and shawls and clothing and things like that. And he figured he could do the same thing for tabulating. So you print out these punch cards, which had in them, you know, how old you were and how many people live in your household and what race you were and where you came from. You put the card on these little pads, you pulled the handle, and it had these cups of mercury underneath. And these little pins dipped into the mercury, and if one connected through the mercury, because there was a hole in the punch card, it sent an electrical current which moved a dial.
And so you would essentially lay these cards down, pull the lever, a dial would move and they just keep doing that, keep doing that. It went up to 9,999 and then the dial would go back to zero. So they'd peer out here, write down the numbers, reset them all to zero and go. It was so effective that they went and by the 1890 census which they thought would take 10 to 12 to 15 years to calculate, within two months they'd already started publishing out the results from certain cities. They did the entire U.S. within two years. So it's kind of that beginning of this big data where more data you thought you could handle, you have to start automating it and processing it. So it's kind of inspirational for where we're going. Does anyone know who Herman Hollerith went on to become? Anybody? The tabulating machine company? They essentially became IBM. So this is how IBM got started with this little mercury cups, electrical currents and dials in 1880. So I'm from the Esri DC office. We're based there. We came out of GOIQ and GeoCommons. The idea is being based there local to the government and building tools that are actually going to be used to help solve these important problems. We're actually technically bad for geography company. We're actually based in Virginia. But hey, close enough. We do overlook DC and we're working there to really try and help and make tools more accessible and understandable to solve these problems. And a lot of them are coming open source and I'll explain why for big data in particular. Big data, again, is what's happening now with this explosion of data and information. How do we actually begin calculating meaningful ways to discover answers to problems before it's too late? Well the common concept with big data is you have these three V's and this is the kind of simple, overly simplistic view of it. You have huge volumes or it's moving too fast or you have a lot of different heterogeneous data types. And that's all true but it still is very, so very, you know, what do you feel about? Does it feel big? Does it not fit in your spreadsheet? And what people personally feel about it? But it's also starting looking at things like Internet of Things and where it's going to be going to. We have big data problems now but what happens when every single sensor of vehicle, your car already has tons of sensors in it itself, it has thousands of sensors, what happens? It starts publishing all that data out. Every single light post, street, corner, intersection, building, window starts transmitting this data, it's huge. So what's really happened is the fact that globally we now have a ubiquitous global network in which we can publish whatever the heck we want on to anybody in the world within milliseconds and the fact that hard drives and computers became insanely cheap in commodity. So what really became was the fact that while we have this ubiquitous network I can send you lots of small packets of information, it's actually still hard to move these huge volumes of data I want to capture. Really it became the data hoarders. I'm going to capture everything and I'll figure out later if it's useful to me at all. So really what it all kinds of means now is big data is when you should just stop moving the data. It's something that you capture, keep it at rest and then the idea is you can start throwing algorithms at that data because algorithms are very small. 
So again the thing about it is that before, you know, in typical GIS land, anyone here who's been doing really any calculations for a long time, you download your data and you run it locally or you even pull in your own database and it was on a server and you then process it, right? The difference is now the, that was good. You wrote code a lot against the data because the data were relatively small but now the data would take too long to move over the pipeline. You'd wait days just to get it to your server instead. You said the data are big so my functions are small. I'll move my functions to wherever the data reside. So that's the one kind of principle. The idea is that stop moving your data around and push your algorithms to that data and they'll talk about the tools that do that. And the other part is then open source and I think something everyone here likes and obviously it's probably our careers now, if not even our passion and our beliefs but what's interesting here specific to things like open data is the fact that this is a new kind of domain about how do we actually analyze these things using non-traditional methods. I usually liken it to thinking like Legos is by open source it means I can go and discover and try out new ideas that people never imagined before. So generally it's open source and the UNIX philosophy is I'm going to be lots of little modules that you can glue together to do things and pipe them together and I could never imagine. So now instead of doing it in one little command line how do I do that across vast numbers of machines and pipe it and play with it and try it and idea, see results in a few seconds and say I like that now run it forever. And I'll show you some examples of how that works. So that's really the power of open source is as any kind of tool builder you can't build the one tool anymore that's going to solve everybody's problem. Everybody's problem now is unique. The data are unique. The parsing is unique. It's heterogeneous. Volumes are different. Velocities are different. We have to enable the developers and the end users themselves as much as possible to put their own intelligence against that data. So I'm going to talk about three different types of big data stuff that we're thinking about and I think is kind of a framework for different methodologies and the tools we're providing to do this. So one is your traditional batch processing. It's taking what you would have done on your desktop and now just doing across lots of machines but doing it in a way that's I'm going to run the processing analysis and then I'm done. I'm going to take the answer. I'm going to visualize it or get a number and make a decision and make an action on it. And that's usually also called like map reduce is the one is a type of batch processing. It's pretty good. The problem is it can be very slow when you're waiting for it. You could have run in it, could have taken a data run and you don't know if your answer is right or wrong till after the process was run and you have to do that every single time. Stream processing has become much more interesting where you start having tens or hundreds of thousands of features per second in which you want to know when something interesting happen as that stream goes by and then you want to do alerting based on that. I don't want to watch the stream. I just want to know when it crosses a certain threshold. So it's a different kind of big data analysis. And then last is the search discovery. 
The idea there is that I don't even know the shape of the needle I'm looking for in my haystack; I just know the general kinds of problems I have. Let me know when any of these things cross over these thresholds. So it's essentially being able to just ask a number of questions and then be told whenever the answers happen to show up. So the framework that's emerging — it's become the terminology to capture all these things together — is this lambda architecture. What it is, in these two sides: the left side is essentially the stream processing engine, and I'll go through a couple of tools that do this, where you're watching the data as it streams by, you're running various aggregate statistics on it, you're looking for moving window averages, and it's capturing kind of general alerting of what's going on. And when something's crossed you say, ah, something happened — there was more crime in a neighborhood just now than I would have expected. That then kicks off a batch processing job saying, how did this happen? What's happened over the last day? That's what I now need to process and understand. So it's applying these two things together, where the stream processing just gives you the general alerting — here's something that happened — and then the batch processing helps you understand why it happened. And then in the end you want to visualize it. I mostly just put in kind of a mesmerizing view. But the idea is that you can start doing these things like just watching all the data stream through, seeing whether my algorithms are doing what they need to be doing, and then alerting me — like in this case that a tornado is about to form or hurricanes are about to get violent — and being able to visualize that in an ops dashboard where someone is sitting and watching it, and then tell me when a certain threshold is crossed. This is actually based on the — I forget his name — the earth nullschool visualization, which we've rebuilt now in Canvas. Anyway, my point is: how do you make all these tools available so you can apply your unique knowledge against them? So we've open sourced a number of tools that we call GIS Tools for Hadoop. But really it's a set of tools that allow you to go and do these different kinds of batch, streaming, search and discovery, and alerting mechanisms. So we're applying a lot of these tools against the common frameworks as well as building some out ourselves. This is a list of some of the things that we're working with and helping build out, and you might recognize some of them. I definitely recommend diving in and checking them out. Hadoop came out of Yahoo originally as an open source MapReduce framework. It's pretty well established — it's essentially synonymous with big data; for the last five years, big data meant doing Hadoop. That's not the only answer anymore, but it's still really powerful. So we've open sourced tools on top of Hadoop. Wukong is a really nice Ruby wrapper around it for processing if you don't want to go and write the Java — I'll have some more examples of that. Pigeon is another kind of query layer on top of it, for when you want to write SQL-like queries. HBase, Cassandra, and Accumulo are three really nice big databases that we're helping spatially enable. Kafka and Storm are both stream processing engines — I'll show an actual application example of using those for doing stream processing — and we're geospatially enabling these processing engines.
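A minimal, framework-free sketch of the lambda-architecture hand-off described above: a cheap streaming check watches a rolling window and, when a threshold is crossed, hands off to a placeholder batch analysis. The window size, threshold, and stand-in batch function are assumptions for illustration, not part of any of the frameworks listed.

```python
# Stream side: cheap rolling check.  Batch side: stand-in for the expensive job.
from collections import deque

WINDOW = 60          # number of recent observations to keep
THRESHOLD = 3.0      # alert when the newest value is 3x the window mean

def batch_analysis(history):
    """Stand-in for an expensive batch job (e.g. a Hadoop/Hive run)."""
    print(f"batch job over {len(history)} records would start here")

def stream_monitor(values):
    window = deque(maxlen=WINDOW)
    for value in values:
        if len(window) == WINDOW:
            mean = sum(window) / len(window)
            if mean > 0 and value > THRESHOLD * mean:
                batch_analysis(list(window) + [value])
        window.append(value)

if __name__ == "__main__":
    quiet = [10.0] * 100
    spike = [45.0]
    stream_monitor(quiet + spike)   # prints one alert for the spike
```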
Elasticsearch is already pretty good, and hopefully we can help out with that. It's essentially the new big data engine for search — it's where Lucene and Solr have grown up into this Elasticsearch engine, which has some really advanced spatial capabilities in it, for everything from search and basic queries of data and features across billions of records, to even alerting when certain thresholds are crossed: tell me what's significant about this one area. That's all actually built into the engine. And then Apache Spark is coming up as kind of the new in-memory streaming and micro-batch engine — it's batch, but in batches of tens or hundreds of milliseconds, so it looks like it's streaming but it actually acts more like MapReduce, which lets you do some interesting things. And again, in 20 minutes I'm just going to blow by these ideas; I'll have links at the end you can follow up on. So the core of this is something that was kind of amazing for Esri, and why we're here: we actually took one of our crown jewels and open sourced it. The Esri Geometry API — not the catchiest name — is a Java engine for doing spatial processing. It's like JTS, but currently it's still — I believe, unless someone corrects me — the most openly licensed geometry engine out there. JTS is under a copyleft license, so it's a bit viral. That's changing, I believe, which is awesome. But right now this one is under the Apache license. So take it, use it, do whatever you want with it, contribute back — you can even use it in commercial products. Go forth. And the reason for open sourcing this was, again, that we weren't going to build the one end analysis solution for everybody with our tools. People were going to go and do that themselves, applying their unique ideas and concepts, and we wanted to enable that, to really get to the best answers possible. Under the hood, if you've used any other geometry engine it will feel very familiar, but it's really full featured. It handles numerous native geometry types, topological operations, and relational combinations between them, all built natively into the library, plus import and export to different formats: shapefile, WKB, GeoJSON, EsriJSON, WKT — all that kind of handling. Just to show you what this looks like and how easy it is, I wrapped it in Ruby, because writing Java on a presentation slide is painful. You can probably get the idea: we load in the different Java libraries, we get a JSON factory with which I can parse GeoJSON and convert it into a native geometry object, which I can then dump back out in another format or feed into my spatial processing — I'll show you an example of that in a few slides. The idea is this gives you the low-level operations for handling geometries, which you then keep wrapping up to do higher-level operations. The library also has different validations to make sure you have good geometries and closed loops, and then general other operations: boundaries, buffers, clips, all your base-level operations, and even quad trees, which become really important for doing large-scale distributed, sharded spatial processing. So what this looks like is that, again, I can load it up, build a quad tree, and just start pushing objects into it, and I have this index that I can use in memory or serialize out and then run queries against. So these are the little pieces.
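The Esri Geometry API itself is Java; as a rough analogue, here is a hedged Python sketch using Shapely 2.x that shows the same two ideas just described — importing GeoJSON into native geometry objects and querying a spatial index. The GeoJSON features below are made up, and Shapely's STRtree stands in for the quad tree.

```python
# GeoJSON in/out plus a spatial index query, using Shapely as a stand-in.
from shapely.geometry import shape, mapping, Point
from shapely.strtree import STRtree

features = [
    {"type": "Polygon", "coordinates": [[[0, 0], [2, 0], [2, 2], [0, 2], [0, 0]]]},
    {"type": "Polygon", "coordinates": [[[3, 3], [5, 3], [5, 5], [3, 5], [3, 3]]]},
]

# GeoJSON -> native geometry objects (round-trip back out with mapping()).
polygons = [shape(f) for f in features]
print(mapping(polygons[0])["type"])          # "Polygon"

# Build a spatial index and query it, analogous to the quad tree usage
# described in the talk (STRtree.query returns indices in Shapely 2.x).
tree = STRtree(polygons)
hits = tree.query(Point(1, 1))
print([int(i) for i in hits])                # [0]
```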
This is the base of the Lego that you start building things around. And again, you can use it directly, and we have Ruby wrappers around it — I think we can do some Python wrappers around it too — just to make it easy to use in different languages, and it's very fast. What we've done, since not everyone wants to work at that low level and many want to work at higher abstractions, is also open source tools built on top of that, which let you do more familiar high-level processing across these data. So at the lowest level you have Hadoop and this geometry API engine at the bottom, with which you can go and build your own MapReduce jobs — which is great if you know how to do that, but if you don't, it can be a bit of a learning curve. On top of that, Hive is a framework that gives you a SQL-like query against these: you essentially write SQL and it turns those into MapReduce jobs. We've extended Hive to have spatial operations and queries. We wrap all that up and call it GIS Tools for Hadoop, which is a lot of different samples, examples, and best practices. And then for ourselves, obviously, we pull it into ArcGIS if you just want to click a button in desktop. But the nice thing is that while someone can push a button in desktop, other people can go under the hood into the libraries and get as crazy and custom as they want to. So what this does — I mentioned quad trees and distributed, sharded processing — in a very quick picture of how this works: we essentially build spatial indices on different servers across the cluster. When objects come in, they are pushed off to that high-level spatial index in terms of what shard they're going to operate against, and then once they get to the machine, it does a finer-grained sharding within its own quad tree. Similarly, when requests come through for things like joins and intersects, it does a high-level pass saying, here's your feature, it's somewhere in New Mexico, go against this server or this index, and then within that we'll find out what county or police district or neighborhood you're in. And that's the concept of map and reduce: mapping it out is pushing the work out to the local machine where the data reside, doing it locally on that machine, and sharding it out. The distributed quad tree is essentially the technique that does that. So what this looks like at a higher level — as I mentioned with Hive — is that in the end people are very good at writing SQL. It's a very common language. Hive essentially gives you SQL across these big data sets. You can read this pretty well: in this case I'm looking at the number of earthquakes by county and just want a calculation against that. So it's like you would count against a relational database, but now it's handled across what could be millions or billions of records, and it runs distributed across however many machines you choose to throw at it. So just as an example of what this looks like, and what it was like before: we took, I think, FAA flight locations and did counts by county — we had 14 million flight locations and wanted to count those by county. It's just a kind of silly little example. The first time we did this, a few months ago, using our geometry engine and these tools on top of it, it took 13 minutes to run this operation.
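A hedged sketch of the kind of spatial Hive query just described — counting point events per county. The ST_Contains and ST_Point UDFs come from Esri's spatial framework for Hadoop; the table and column names are assumptions for illustration, and the query would be submitted with whatever Hive client you normally use (beeline, pyhive, and so on), so the Python here only assembles and prints it.

```python
# HiveQL string for a points-per-polygon count; assumes the spatial UDFs
# (ST_Contains, ST_Point) are registered and these tables exist.
HIVE_QUERY = """
SELECT counties.name, COUNT(*) AS quake_count
FROM counties
JOIN earthquakes
WHERE ST_Contains(counties.boundaryshape,
                  ST_Point(earthquakes.longitude, earthquakes.latitude))
GROUP BY counties.name
ORDER BY quake_count DESC
"""

if __name__ == "__main__":
    print(HIVE_QUERY.strip())
```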
So 13 minutes: not horrible — you know, faster than a day; maybe I can still run out and get a coffee — but it's 14 million records, so it's not bad. But then we realized we could do better. We could optimize things: how we shard them out, and some of the spatial processing on the server itself. In the end, it now takes about 56 seconds to aggregate 14 million points, which is not too shabby. And this is still batch — this isn't even streaming; streaming would be real time. This is saying: I have a question, what does the answer look like? It's not even enough time for me to go and get a coffee. So, sorry if that impacts your productivity, but it's pretty cool to start seeing. And a new version actually just got released this morning, I heard from the team — version 1.2 — which has some more optimizations in it too. We haven't run benchmarks against that yet, but it could be a little faster, and it'll be more interesting to start doing more than just points and polygons and get some more benchmarks around that. So that's something we're working on, and we'd love to hear if you have use cases like that. Another example here is kind of interesting. This is a project for an automotive company in Japan — and that's about all I can say, so you can kind of guess. They wanted to know where they should try to promote carpooling. The idea was that people who live near each other might work in the same area, so how can we connect them so they actually carpool and help with smog and traffic reduction? So we took 40 million points — vehicle track locations of where people are commuting — and looked at where they were in the morning within 15 minutes of each other, and where they ended up at the same place within 15 minutes of each other, within 500-meter grid cells. And from that we derived where there actually are common carpools: here are 100 people who could all be collaborating, because they're all starting in the same neighborhood and going to the same workplace. I think that took about 30 minutes to run. So again, something we can improve, but the idea is that you can start answering some pretty interesting questions. In the end, really, the answer isn't even a map — it's just a list of addresses where you should promote carpooling. But underneath it, obviously, it was a spatial question. Similarly, we're using a lot of these tools for — and we've actually talked a little more about this publicly — the Port of Rotterdam, which wanted to know, from all of the ships that come in and out of the different ports, where there is the most traffic and congestion, and where they can put in better signaling and things like that. So we took a year of AIS data — essentially ship tracking — all of the data of ships coming in and out, and did some quick spatial aggregations to hex bins. So check that off the bingo card. They wanted to see where the congestion was over time, and the idea is to now monitor this in real time: does this change over time or not? They've answered it for the past. So again, this is that lambda architecture: they've processed it, they at least know the pattern, and now they're going to set up an alert, watch whether this pattern shows up again, and then kick off some more processing afterwards.
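A simplified, in-memory Python version of the carpool-matching idea described above: snap each commuter's origin and destination to a coarse grid cell and a 15-minute departure window, then group commuters who share both. The projected coordinates, cell size handling, and sample trips are assumptions for illustration, not the actual workflow.

```python
# Group commuters whose origin cell, destination cell, and departure window match.
from collections import defaultdict

CELL_M = 500      # 500 m grid cells; coordinates assumed projected, in meters
WINDOW_MIN = 15   # 15-minute departure windows

def key_for(trip):
    _, ox, oy, dx, dy, depart_minutes = trip
    return (int(ox // CELL_M), int(oy // CELL_M),        # origin cell
            int(dx // CELL_M), int(dy // CELL_M),        # destination cell
            int(depart_minutes // WINDOW_MIN))           # departure window

def carpool_groups(trips):
    groups = defaultdict(list)
    for trip in trips:
        groups[key_for(trip)].append(trip[0])            # trip[0] = commuter id
    return [ids for ids in groups.values() if len(ids) > 1]

if __name__ == "__main__":
    trips = [
        ("a", 5010.0, 12020.0, 9910.0, 20130.0, 8 * 60 + 5),
        ("b", 5120.0, 12340.0, 9750.0, 20480.0, 8 * 60 + 12),
        ("c", 8200.0, 3300.0, 9910.0, 20130.0, 8 * 60 + 8),
    ]
    print(carpool_groups(trips))   # [['a', 'b']]
```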
So, another real-world example of alerting. Where this also comes in is, for example, cities that want to have audio detectors on rooftops so they can detect gunshots. You're listening to lots of different sounds streaming in, and when something crosses a certain decibel level, it kicks off a batch processing job to triangulate, from all the microphones, where that gunshot probably was. Another example of this, something we've shown in the past — you might have seen it — was looking at social media tweets during a disaster. In this case it was Hurricane Sandy a few years ago, where we did a normalized aggregation of social media mentions of power outages, compared to people tweeting in general in Manhattan, compared to people talking about the hurricane globally, compared to people on Twitter talking globally. And so we asked that question: okay, whenever this threshold is crossed — people are talking more about power outages than I would expect — let me know. As a disaster responder, I'm going to go off and worry about other things, and the visualization can run and do whatever the heck it wants; I only want to know when it's important. So we're using a geo-enabled Storm to do the processing of these tweets. We then pushed the aggregate values over a web socket to the browser and visualized them. And then in a few seconds you start seeing this: a threshold was crossed. Something I care about just happened; I can now dive into the specific features — cut them out of the numerous features I had — and find those specific ones: what did they say? A power transformer exploded; I can actually grab photos of the explosion outside people's windows and verify that. And then over time, see how people are moving in response to that power outage. By the next day people had moved north toward Grand Central Station, where there was still power, so when it came to sending water and blankets they knew where to send them — to where people were going to be, not where they had been living. So that's real-time alerting. And just to show what that looks like — some quick benchmarks, because numbers matter — this was a geo-enabled Storm process running on a Mac, just for benchmarking. Just streaming data through was 10,000 features per second; with parsing tweets it was 6,000; and if we geo-joined them by grid cells it was about 5,000 tweets per second — on a single Mac. And Storm, if you haven't used it, is essentially like Hadoop for streaming. Twitter bought the company behind it and open sourced it, and it's what they use for their ad engine. So it can handle the volumes, and we're just spatially enabling it to answer these location-based questions. So, going forward: I was hoping to have a demo of this, and I don't yet, so I will soon and I'll blog about it. We've released something called ArcGIS Open Data, which we're essentially giving away to every government in the world to make their data open and accessible via GeoJSON and other formats. We have 1,000 sites created globally, with more showing up.
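A back-of-the-envelope Python version of the normalization described above: compare an area's rate of outage mentions against the global rate and flag the area when the ratio crosses a threshold. The counts, area names, and threshold are all made up; the real pipeline ran in a geo-enabled Storm topology, not in a script like this.

```python
# Flag areas whose outage-mention rate is far above the global baseline.
def outage_signal(area_outage, area_total, global_outage, global_total):
    """Ratio of the area's outage-mention rate to the global rate."""
    area_rate = area_outage / max(area_total, 1)
    global_rate = global_outage / max(global_total, 1)
    return area_rate / max(global_rate, 1e-9)

def check_area(name, counts, threshold=5.0):
    score = outage_signal(*counts)
    if score > threshold:
        print(f"ALERT {name}: outage chatter {score:.1f}x the global rate")

if __name__ == "__main__":
    # (outage mentions in area, all tweets in area, outage mentions global, all tweets global)
    check_area("Lower Manhattan", (180, 2000, 900, 500000))   # fires an alert
    check_area("Midtown", (12, 3000, 900, 500000))            # stays quiet
```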
Our hope is that now, with these big data tools, people can go off and take all this amazing open government data and start answering meaningful questions around climate and disaster resilience, the location-based impacts of schools and poverty and health, and other aspects — answering these important questions for society through open source tools and open data. That's personally what I'm driving for and what we'll be doing with it over the next six months. So, to wrap up, these are the URLs to check out: esri.github.io has all of our 300 open source projects that you can explore and dive through, including a lot of the ones I've shown here. GIS Tools for Hadoop is the string to look for for the specific stuff I've talked about. And one of our engineers in particular is a prolific, awesome, amazing engineer who did a lot of the analyses I've shown here: Mansour Raad. He also blogs prolifically, and all of the code he writes is in his GitHub projects, where he's built specific applications around these tools. Thunderhead Explorer is his blog — I definitely recommend checking it out — or find mraad on GitHub. It has a lot of other kinds of examples you might want to check out. So that's it for my super brief big open source data talk. Thank you very much; I appreciate you being here, and I'm open for any questions. Questions? You guys are all already downloading the tools right now? Be nice to the Wi-Fi. Cool. Well, I'm available afterwards if anyone wants to hang out and chat, or we'll also have a booth up in the exhibition section — or find me over coffee. So again, thanks very much and have a good afternoon. Bye.
|
We've gone to plaid. It is now easier to store any and all information that we can because it _might_ be useful later. Like a data hoarder, we would rather keep everything than throw any of it away. As a result, we now are knee-deep in bits that we are not quite sure are useful or meaningful. Fortunately, there is now a mature, and growing, family of open-source tools that make it straightforward to organize, process and query all this data to find useful information. Hadoop has been synonymous with, and arguably responsible for, the rise of 'The Big Data'. But it's not your grandfather's mapreduce framework anymore (ok, in internet time). There are a number of open-source frameworks, tools, and techniques emerging, each providing a different specialty when managing and processing fast, big, voracious data streams. As a geo-community we understand the potential for location to be the common context through which we can combine disparate information. In large amounts of data with wide variety, location enables us to discover correlations and amazing insights that would otherwise be lost when looking through our pre-defined and overly structured databases. And by using modern big data tools, we can now rapidly process queries, which means we can experiment with more ideas in less time. This talk will share open-source projects that geo-enable these big data frameworks as well as use case examples of how they have been used to solve unique and interesting problems that would have taken forever to run or may not have even been possible.
|
10.5446/31652 (DOI)
|
Easy to look at. It doesn't require any additional data. However, the user has to actually look at the data itself. Okay, and go back and look at the data itself and try to, if you're looking for time, you'd have to look at the labels, or you could include colors. Okay, so this is also a static route map, and it's good for a printed publication because it's done in black and white. The limitation is the viewer has to interpret time and location just like on the other map. I don't know if you can see that very well, but you can see that there's a route along there, and you have to actually get in and read the labels and see what's going on. And there is also a key associated with this. I didn't put it on there, but there is a key. What's nice about this is it's black and white. Okay, so animation maps are like movies. Basically, you know, it's like going on YouTube, clicking play and watching. That's it. That's just an animation map. There are some that have controls where you can go back and forth, which is really nice. This is a track map of the HMS Beagle Voyage, and the color. This is a movie, and the color is denoting the temperature. And I'll just say that this was done with time manager, and the big limitation with time manager as well as with using Excel for parsing is the date limitations on the library. That's a huge problem. So basically, this is the dot that goes along the route and lights up. I won't play that one right now, but... So this is a light rail line through Portland. This is with leaflet, and it's a track animated. So basically, this is like if you had a GPS route, you could go and get this, and you can, you know, move your slider back and forth and see where you are over time. I'm pretty sure you've seen a lot of those over time. The limitations are the visualization implies that the travel is only one speed. I don't have variable speeds in there, which would be really nice, but I don't. So I'd have to make separate maps for variable speeds. So that's what this is. The train goes down along the track, and then each one of these black things is an actual stop, and the user can actually click on the location and get information. Okay, so I talked about sliders. You're going to see that that is the most popular component for time maps, and web visualizations are sliders. There's jQuery slider. There's these open layers sliders and other sliders. Dojo also has a slider, and Dojo is a really good piece of software as well. It's a little complicated, but it's very good. Okay, so this is not a network right now, but you could include the rivers in there, river center lines. The rivers are on the map, but they're really hard to see. And I think if you're going to do a flood map or anything having to do with rivers, you need to include a separate layer for rivers that's beyond the base map, because the base map that you download, these are all tile base maps, down plays the rivers. So you can see the water on there, but see how light it is? What I would think is center lines for rivers would make it much better. The thing nice about this is it shows, is it going? There it goes. You see this top part right here? It allows you to filter data, so that means I can have several different types of data along the stream. I can have high floods, low floods, and medium floods. This also includes duration. How long the floods last. And then you can click here to go to more detailed data, so you can have things linked together. 
And as you move across, you see things appear and disappear. So if you had the center lines on there, you would see that there. You could actually put this data on the river network itself, but in order to do that, you would need to slice the river at these locations, from a start point to an end point, and that way you could have the portion of the river be the size that you want it to be. Here you actually do need to slice it; this is very limited data. Inside of your GIS, you would have it as linear referencing. If you're not sure what linear referencing is, it's locations along a linear feature — the mile points along a trail or a stream — so you can have an event from one measure to another, and you have event tables inside of your GIS. To put it into here, you have to export it out, and you have to put it into a format that's usable inside of these libraries. These are JavaScript libraries, and they prefer JSON format. Okay. So if you set it up inside of your GIS where you do linear referencing, then you could write some scripts to clip the data and export it out into JSON format on the server, and have that load up under the same file name over and over again every day. You wouldn't need to store that output — it could just be replaced, because the data is still inside your GIS. Okay. So one other thing: this is Timemap.js. One really great thing about Timemap.js is it shows polygons, it shows rasters, it shows path lines, it shows routes — it basically can show everything. It's still JSON data, but it can show everything, so it's a very versatile library. It used to be that Timemap.js would only work inside of OpenLayers, and now there is a way to get it inside of Leaflet. So you can get it directly into Leaflet, which means you can have all of these data sets inside of Leaflet — your path lines or your routes right inside of Leaflet — and what's really great about this is not only can you click on things and include those links, but you have two levels of timelines. It's a really, really good timeline library; it's the best one I've seen. So this is path lines. Just give it a second. How much time do I have? So you can click on things and get information. Okay, so the second library that I found that's really, really good is for Leaflet, and it's from HumanGeo — data visualizations for anthropogenic data sets, but it's not limited to that. The reason why I think it's really good is because the proportional symbols are great, and it includes those sparklines — that is, a graph at a particular point. I didn't put that example on here because he has too much data. If you're going to include those graphs on a map, it's got to be very, very few data points, or it's got to be inside the little blob, because if it's spread across the entire map, you can't read it. But what's nice about this is that this is a very famous visualization — you know, it's Napoleon's March, right? It's in the Tufte book; Charles Minard. What's nice is that Leaflet has another plug-in that allows you to show ends on the ends of lines, so you can show arrows, so you can show flows, which means you could have several line layers. You can include a line layer with those ends to show the location and the flow, because right now you have to actually click on the information dots to see when they were there. But if you added another layer with more lines, then you could see flow, and you could color code it by time. Okay?
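A hedged Python sketch of the export step described above: take linear-referenced events (a from-measure and a to-measure along a river line) and cut them into real geometries that can be written out as GeoJSON for the web map. Shapely's substring() does the slicing; the river geometry, measures, and attribute names are made up.

```python
# Convert a linear-referenced event table into sliced GeoJSON features.
import json
from shapely.geometry import LineString, mapping
from shapely.ops import substring

river = LineString([(0, 0), (1000, 0), (2000, 500)])  # measures in map units

events = [
    {"flood_stage": "high",   "from_m": 200,  "to_m": 600},
    {"flood_stage": "medium", "from_m": 1200, "to_m": 1800},
]

features = []
for event in events:
    segment = substring(river, event["from_m"], event["to_m"])
    features.append({
        "type": "Feature",
        "geometry": mapping(segment),
        "properties": {"flood_stage": event["flood_stage"]},
    })

print(json.dumps({"type": "FeatureCollection", "features": features}, indent=2))
```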
I can show you what this looks like. This is a really wonderful library. See, it goes along. You hover over the dot and you get information. If you click, you get an actual table. And this is from the same library. This is a flow map. So it's rasterized lines. Why is that important? Rasterizing lines? Because you can do this kind of thing. You really wanted to be able to do a diffusion map with networks. That's what this is. This is a diffusion map. It's showing the locations of runners over time. Okay? This is flights to different locations. And this is also with that plug-in. And what's nice with this one is, again, if you click on each spot, you can get the actual data set or you hover your mouse and you get the legend information for each specific spot. And then it shows you where it's going to. And you can have many different layers under the layer thing up there. And that's what I would recommend is many different layers to get your point across. One layer for the planes, one layer for the routes, another layer perhaps for if you're trying to show time in a specific way. This is the Beagle map with open layers. My limitation with most of these is that these sliders are not smart. They're stupid. If you don't sort your data, it will not appear in the correct order. So you need to sort your data. And that one actually has a search so you can search. But it has to be in the time format that you have. If you rewrite a JavaScript function that allows the user to pick a time format from a calendar and from sliders, it would be much better because the time format is horrible on most of these. Really, really horrible. Yeah, I know it's time. Okay. So I'll just finish up with this one. This one is open layers. Again, this is a movie. It allows you to step through the movie. And this thing changes as it goes up around the hurricane track. And you see these are raster clouds coming through so you've got every single piece of information in here. What's nice about this one is it holds everything. What's horrible about it is the functions are, the controls aren't very good, but they're better than some of the others. So I'll stop there. So are there any questions? No? Okay. Thank you.
|
Maps are traditional means of presentation and tools for analysis of spatial information. The power of maps can be also put into service in analysis of spatio-temporal data, i.e. data about phenomena that change with time. Exploration of such data requires highly interactive and dynamic maps. Using geospatial open source software, various techniques for visualizing spatial temporal network change data and combinations of spatial temporal network, point and area data are evaluated. Linear referencing represents locations along routes, linear features with an established measurement system, using relative positions. It allows locating events along routes without segmenting them, and has been applied to manage linear features in transportation, utilities, along trail networks and stream networks. Linearly referenced events occurring along a network through time are visualized using both animations and interactive time line visualizations. Sliders are used to give the user manual control to step through the data, allowing them to explore the data presented in each time step. Categorized point events (i.e. traffic accident types, flood locations, etc.) appear at multiple locations along the network. Color and size of symbols are used to denote these dynamic point event attribute changes and location changes. In addition, line segments are mapped using size and color to identify the changes occurring over time. Some of the combinations of changes evaluated include: attribute change (i.e. traffic accident type), spatial attribute change (i.e. flood boundaries), moving objects (i.e. traffic accidents), rate of change (i.e. fish survival by stream segment) and spatio-temporal aggregation (i.e. multiple fish releases by watershed). Some linear visualization techniques evaluated include: run maps and map and line chart visualization techniques similar to the famous Napoleon's retreat Minard visualization.
|
10.5446/31653 (DOI)
|
Well, thank you for staying for my talk. This morning I went to two great talks. One is about the toolmaking. The other one's for keeping things simple. I have some specific cases, what it can do for using tools, and what we should do for keeping things simple. There are some specific cases. Both are using polygons. And the kind of problems can be caused by having polygons in map compilation, updating, and integration. And I will give you some specific cases, which cause something I call the post-polygon stress disorder, PPSD. And if you stay with my talk, hopefully afterwards, you're going to be cured or won't have a PPSD anymore. So just a little bit of background on the cases. And then specific cases, and then our solutions. I'm a geologist by training. And we do a lot of geological mapping for the province of British Columbia in BC. And just a few examples. We do need polygons. We need polygons to capture our features. And we do need polygons to represent the final map products. And for the province of British Columbia, we got 100, 100 of these kind of maps covering the province. And just by the way, the size of British Columbia is the Washington, Oregon, and California combined. It's almost four times bigger than UK. And so over the years, we have been compiling and integrating those individual maps and coming with this single, integrated, seamless digital coverage for the province. Now the use case right now here is we've done our first mapping. Now we want to update one of the areas. So it kind of makes sense. You would do a cookie cut for this map area. Take a copy, cut it out. And the mapper or our geologist will take it to the field. Hopefully in a year or two years, he finished his mapping. And they updated the map for this area. Ideally, we can just drop it back to the provincial database seamlessly without any pain or any work. But it doesn't happen that way. So there's all kinds of things could happen along the map edge. But also even for updating the map within the area, there's all kinds of cases where when you use polygons to update, the kind of problem we'll have. There are many, many problems. But I will focus on just two. One is what we call shared boundaries. The other one is called edge matching. I remember in the early 1990s, the first GIS course I took, there's a lot of pages, pages, chapters on edge matching. And hopefully after my talk, you will find out the edge matching for me is a history. There's no more edge matching anymore. So for shared boundary, just some examples to show you the specific cases. This is a part of a geological map. We have unit A and unit B share a common boundary in between. But this doesn't have to be bedrock geology. It could be land use, could be a disaster, could be municipal boundaries, you name it. And what happened here is not only two lines share the same boundary for the two polygons. In this case, we also have a fault cut through here. So the contact between unit A and unit B is also a fault in the contact. So here, really, we have three, minimum three features occupied by the same space. Now, I went ahead of myself a little bit here. So when we need to update one of the features, so let's say the fault has been remapped and we know this fault is the boundary for unit A and unit B. So right away, you're going to find some problems here. Doesn't matter what you do. You can spend all your time manually trying to adjust the geometry for polygon A and for polygon B. 
Quite often, what you're going to find out is that by the end of the day, you will have gaps and overlaps along the boundary, both between the polygons and also between the polygons and the line work. So that's the first case. The second case is something called edge matching. So we could have map A, something we mapped earlier, and we mapped the adjacent area, map B. Obviously, you see some differences there in terms of geometry, but also the attributes. Ideally, we could resolve all the boundary issues and get the maps merged seamlessly. It doesn't really happen that way often in the real world. What we will have here along the boundary between the polygons is gaps, overlaps, and slivers, and the lines may not join — they could overlap, they could be disjoint. And the attributes, in terms of the map units, may not be consistent across the border. I have seen places where people purchase expensive tools and hire a team of GIS technicians working on this day in, day out, for weeks and months, trying to resolve those kinds of problems. And when I thought about it — what are the results? People spending so much time doing that, and low productivity. When you have your hands on your mouse for the whole day, you get injuries to your wrist and your shoulder. So it's not too far from there to the PPSD. So the big question here — and again, this relates to what I heard this morning — is: you can spend all your time, or spend your money, to purchase or develop tools, but sometimes you have to ask the question, do we need to? Can we avoid these problems in the first place? And to try to avoid these kinds of problems, we have to find the cause of the problem. It's not too different from how we map geology. What we do is we go to a point location, and we identify the boundary. From lots of point locations, we join the dots and form the line work. And out of the line work, eventually, we create the bedrock geology: form polygons, color them, create a legend, add the cartographic enhancements, and have them published. So we actually started with points and lines — the polygons were not there in the first place. So really, I think the polygon is the cause of the problem, and we should get rid of the polygons in map compilation, map updating, editing, and also integration. So the solution: we developed this idea — again, perhaps nothing new; it follows what we always do. Really, in the back end, in the source of the data, what we need to keep maintaining is the line work and the points representing the geological units. These are the only two things we need. So I put in this term, geologic framework data model. It doesn't have to be called that way — for lack of terminology; I know 'framework' means a lot of different things to different people, but just for the sake of it, we need some name here, so we just call it GFD for short. Essentially, the lines can be geological contacts or faults; in other cases, they could be the boundary for land use, a land parcel, a river, a municipality, whatever. And the points are centroids carrying the attributes describing the land use or land cover. So essentially, we just need these two types of geometries to represent our data. Then, by the time you need to create your final products, you can easily create polygons from the line work and populate the attributes from those points. So just to give you a quick, simple example.
In the province of British Columbia, we have one million vertices defining the geology. Out of the one million vertices, we have hundreds of thousands of lines, and it takes us less than three minutes to create 32,000 polygons within PostGIS. So going from what you have on the left to what's on the right is a really, really simple process. It's really quick; it doesn't take long — not like in the early 90s, when forming thousands of polygons meant we had to run these things over the weekend. So the framework data model, with only the lines and the points, also allows us to develop another process, which we call the anchoring mechanism. With this process we can totally avoid any problems in the edge matching. Let me explain in some detail. The first step is data checkout. It's very similar to checking out a book from a library. Before one of our geologists heads into the field, they will give us the study area, outlined by the black dotted lines. From the study area boundary, we're going to select not only the geology within the area, but we're going to use it to select all the polygons that have something to do with this updating area. And from this extended context, we're going to form a tight buffer, and we're going to use this buffer to select our framework data, which are the lines and centroids. So polygons are useful here: we need the polygons for the initial filtering, but once we've done that filtering, we throw them away. Essentially, we just needed this buffer to select everything within the area. So this is our first step. Before I get too far ahead, a simple example here: if you take the data out from here, run it on a round trip — put it into different GIS packages, do a round trip through a map projection — and then return the data back here, and if you don't have a precision model or some kind of control, I can guarantee you that the framework data you return for this area is not going to match what we had there before. This is a well-known and well-understood phenomenon, what we call coordinate drifting. Essentially, if you take a piece of data and run it through multiple processes — map projections, loading into different systems — once the data comes out of that process, all the coordinates will have drifted around. Unless you have some magic, you can't avoid this — let alone when you're also going to do some editing. So how do we control this kind of drifting? We borrowed some nautical terms: anchor line, rode line, hook, and anchor point. Rode, R-O-D-E, is actually the term for the line between the boat and the anchor. There's a little bit of description for each of the terms — it's OK if it's a bit much here; you will see the actual definitions in some graphics later. So what do we have here on the database side? We can anchor it automatically. The outermost line, the one showing up in red, is something we're going to tag as the anchor line. Any line connected to this anchor line is called a rode line. And here, where we have a node, it becomes a hook on the anchor line. And the end of the green line is the anchor point. Make sense? Maybe I can just use the pointer here: this is the hook on the anchor line.
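A hedged sketch of the "build polygons from the framework lines and attach attributes from the centroids" step described above, as it might look in PostGIS. The table and column names are assumptions, not the actual BCGS schema, and the SQL would be run with whatever client you use (psql, psycopg2, and so on); the Python here only holds and prints the statements.

```python
# PostGIS-style SQL for polygonizing noded framework lines and joining
# centroid attributes; lines are assumed to be fully noded at intersections.
BUILD_POLYGONS = """
CREATE TABLE geology_polys AS
SELECT (ST_Dump(ST_Polygonize(geom))).geom AS geom
FROM framework_lines;
"""

ATTACH_ATTRIBUTES = """
SELECT p.geom, c.unit_code
FROM geology_polys AS p
JOIN unit_centroids AS c
  ON ST_Contains(p.geom, c.geom);
"""

if __name__ == "__main__":
    print(BUILD_POLYGONS.strip())
    print(ATTACH_ATTRIBUTES.strip())
```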
So again: the red line is the anchor line; the rode line connects to it; and the end of the rode line is the anchor point. You will see why we need to tag these from a real example. The next step is really taking all this data out, but before taking it out, we tag it — anchor line, anchor point, rode line, whatever — and this is the package we're going to give to our mappers. It could be the same kind of scenario if, say, you need to update a disaster map: if you run this kind of process, there will be some additional data that needs to be taken out and packaged for the GIS technician to update. In our use case, the map is taken out and updated by the geologist. This could take six months, a year, two years, depending on how big the area is — sometimes up to three years. By the end of the update, we'll have a new map coming back. Again, we don't really care about the polygons they have; what we do care about is the line work and the centroids representing the attributes. Before they return this to the province, they drop the anchor line they were given. The anchor line, for them, is really just a boundary, a limit: those are the lines you don't want to touch or modify. If you do need to modify them, that means your mapping area has extended; you need to come back, and we can do another checkout for you and just extend the area further. So this is what we wanted back. Now, back to the provincial database — this could be your corporate database. The first thing we're going to do here, in the corporate database, is retire everything that had been checked out. The next step is to drop in the update. As you would expect, because of drifting — and sometimes some modification as well — the rode line that is supposed to connect to the anchor line may have become disconnected or overlapping, whatever. So this is where this process comes in. Assuming the rode line was initially connected to the anchor line at the point shown as a hook, after drifting away, we can snap it back. If that makes you uncomfortable — if you've got thousands of these kinds of cases — we can issue something like a marriage certificate, using IDs or whatever: this rode line is connected to this hook and it's going to go back to this place, no matter how many meters it drifted away. Although the shift, in most cases, depending on the scale of mapping, might be by centimeters, by meters, or tens of meters — not by hundreds. If you do get a case where something has moved away by hundreds of meters, maybe that's something different. Anyway, either you can pair them up, so you know for sure this particular anchor point is going to be snapped to this hook — that's something you can do — or you just apply a simple geometric snap. And after you have done that, everything's connected. You can form your new polygons for the whole area, or just form new polygons for the updated area. The point I want to make here is that these lines — the anchor line — never left your corporate database. That means all the polygons outside of this area are using this line as a boundary, and nothing has happened to it; there's no modification. And all the polygons inside here are using the same line as their boundary too, so both sides share the same line here.
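A tiny Python illustration of the snap-back step just described, using Shapely's snap() as a stand-in for the automated PostGIS process: a rode line that drifted slightly during the round trip is pulled back onto the hook vertex on the anchor line. The coordinates and tolerance are made up.

```python
# Snap a drifted rode-line endpoint back onto the hook on the anchor line.
from shapely.geometry import LineString
from shapely.ops import snap

anchor_line = LineString([(0, 0), (100, 0), (200, 0)])   # never left the database
drifted_rode = LineString([(100.4, 0.3), (100, 80)])     # should start at the (100, 0) hook

snapped = snap(drifted_rode, anchor_line, tolerance=1.0)
print(list(snapped.coords))   # start vertex now coincides with the hook at (100.0, 0.0)
```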
There's no gaps, no overlap, no slivers along this area. So essentially, the only thing we need to do here is run some really, really simple geometric snap or using pairing process. So basically replacing the coordinates at the point of the hook on the anchor line. So essentially, the edge matching is fully automated. There's no human intervention here at all. So once you have done that, obviously, you can produce new polygons, put on the labels, you can run some kind of cartographic enhancement, produce the final products. So in our case, essentially, in the back end, everything's lines and the centrioles. So the polygons become a view of the data as a final product, as a product facing the end client. So the client doesn't really actually see everything in the back, like the centrioles. The whole process was developed in Postgres, PostGIS. So the process of checking out, anchoring, and integration, they are fully automated. Just a few messages. These problems can be totally avoided by not having polygons in the map compilation, updating, and the integration process. And also, the next message is, when you have the fund to purchase expensive tools, I would suggest take a good look, sometimes ask the hard question, do we really need this expensive tool? Do we really need to have a problem here? And so the framework data model, the anchoring process, they are fairly easy. They're really simple. You might be in any way, because right now we only deal with the lines, points. What can be more complex than that? And the whole thing can be developed, implemented, in the open source database. And for us, the PPSD is over. It's cured. We don't have it. Thank you very much. Thank you. Any questions? What happens if your fault line is within the red area? Yeah. And it goes outside of it, too. Right. So you just have two segments? Yes, in our framework database, everything is fully segmented. That means anywhere there is an intersection. The line is fully, it will be noted. So I can give you one example. Yeah, so like here, this is the fault, continuous, right? But the fault will be a broken here. Maybe that's not the best example. Let me see. I don't have anything else here. Do you have problems matching those segments? No, because those segments, they will be noted at the anchor line. So in this case, if I have to, if it happened to be here, these lines will, like this fault, will be cut into two pieces. It's not a continuous piece. Yeah. You kind of touched upon this a little bit earlier. Did you have some sort of a threshold or tolerance when you were doing this with x meters away from my hook, and I don't attach it? Or if I am this close, then I will attach it, or something like that? Yeah, the tolerance in our case, because our geological map, someone could be mapped at the 1 to 10,000 scale. But in general, they are mapping at 1 to 50,000 scale. So at the 1 to 50,000 scale, and in a map like this, even if you give like 10 meters, 20 meters, that would be fine. But what I found in the modern days, people using the kind of like GS tools after round trip, if they check the data out like this, usually they shouldn't see hundreds meters of drifting. But we did have a special case where we have a really large area, the map being taken out, and from Albers projected to UTM, and within UTM, even a single line used to be a line run to this, right? And the one going into UTM, and they cut somewhere in the middle. They didn't do anything, but this is a really large area. 
So when they return the map, there's a 200 meters drift. And so sometimes, let's say this line has a few thousand meters. If you cut a line in the middle, R has been reprojected. You didn't cut, you didn't get to cut outside, right? So this line could be modified. Especially, I think the real case here is you have a perfect straight line, and you cut a somewhere in the middle. That line, the moment you put a node in the middle, it's not going to be straight line anymore. And if you reproject it somewhere else, it could have all kind of followed. So usually, because our map actually started, the original compilation is in UTM, because it's a long, up-bidder, small area at one time. So it was always controlling UTM. And so once we merge the more together, give it more join, it's either in latitude, longitude, or in elders. And usually, we don't like to do any process, in terms of densification or simplification. If you have random, we will always take the map out randoms, for example, in UTM, because that's a little bit more true for what was originally compiled. So anyway, in short, to answer your question, because we're a map at 1 to 50,000 to 1 to 10,000, so we can accept meters or up to 20 meters, that kind of tolerance. But if you map at 1 to half a million, or 1 to 5,000, then those things need to be adjusted. Could you elaborate on how your centroid approach compares to classical topologies, where you define the points and then define a line that connects these points? And then a polygon is defined as the sum or the sequence of certain lines. And then you can't have diverging or overlaps anymore. OK. Yeah, understood. We did look at bringing some kind of topology way to manage our data. And everything we have looked at, it turned out to be so complex. So what do we have here? Actually, there's actually no topology per se. So all the lines, they're all together. But we do run, we have to make sure, anywhere there's an intersection, it must be noted. And if you really want to form polygons, that's a big, big, big space. If they're not noted, when you form polygons, all kinds of problems would occur. So we have run. So essentially, we did this earlier. This is some new maps coming in. The only work we have to process is to make sure the map coming in from our map first. They have to be fully noted at every possible intersection. And sometimes, if it's a little bit short by 2 meters or 2 centimeters, we need to run some process to detect those cases and to fix them up. So there are cases where the geologists will say, well, no, I don't want to connect that. Because I left a gap of 2 centimeters for reasons. Because I just don't want to break that into, cut that into. So in this case, we said about 2 centimeters. It's really easy to, if you get the map of some process, it could actually become overlap or crossing. So we'll say, well, is that really 2 centimeters? Can we make that about 2 meters? If it's 2 meters, it's almost guaranteed it's not going to cause problems. So I think the answer is, no, we don't have any topology. We try to keep these things really, really simple. And so that's another way. When we form polygons, the number of polygons and the number of centroids is a way to validate, do I have too many polygons or too few centroids? For this, if there are some differences there, we've got a problem. So essentially, every polygons we formed from this updated frame of data, you will have to have centroid to represent the attributes. 
If you have two centroids, you've got a problem. If you have one, you have no centroid, then something's missing. So that's a way to validate each other for what could happen there. But we really try to keep it really, really simple. Hello? Only a short remark. I think what you have presented here is more or less a re-vention of a topological data model, which was very popular in the 90s, 1980s. It was also used back in for the coverage and also is now used by grass. So if you're using such programs, I think it would be very the same. Because in these programs also, every polygon is defined by the lines in the centroid. Polygons are defined as centroid? Yeah, polygons, the attributes of polygons are always defined by the centroid. And geometries are defined by the lines, which are surrounding these centroids. So it's a topological data model, which is very common and is used by a lot of programs. OK. I could give you many, many more examples. If you have to merge two. Yeah, I know this. It's a process that was called cleaning in Arc Info. You might want Arc has an Arc node topology, right? And there's also a topology for Post-GIS, which has been trying to finally develop or do a little bit more of what's going on there. And I think anything else, either there's a system behind the scene that's working really hard to keep up what's going on here. What I can do is I have many more specific cases where if you are dealing with polygons, there's some more, some more cases where it's really hard to keep up with some of these cases. When we started with polygons, we even had polygons, small polygons sitting behind big polygons. So I'm going to find it. And also, small polygons along the line, you can't even see it because the polygons are so thin. It's like a pi by 2. Yeah, I know this. 400,0001 meter, running a close line. And it does a little bit of here, a little bit there. My only remark was this is not an invention by the PGD or how to call it, is a very old thing. It's a topological data model. That's the only point what I would do. And I'm also a geologist. And I know the geologists. I'm sure if geologists are going into the field, they want it exactly to edit the green lines. Right. Yeah, that's what it is. Well, essentially, the way the reason, you're right. Part of the reasons we proposed this, we developed this process is, why you give a piece of data to a geologist. You can't assume he's not going to bring your database with you. You will take a piece of data to the field, work on whatever GIS, again, we can't dictate which GIS tools you have to use. So essentially, you have to give him a separate data. It's easy for him to manipulate. And once it's done, coming back to us, it's easy for us, because we might hire a student to work on the data, to do some cleaning up work. And any data that's beyond the celebrity complex, I know that you can find tools to manage all the celebrity complex. What happened to me is I have seen so many, so many different cases which could pop up. You know, that's one side, I was finding the small polygons hiding behind. While you visually can see it, there are some small ones, they're so small, it doesn't matter how much you zoom in. You can see it, right? It's 0, 0, 1, 0 on one side, 0, 0 on the other side. So visually can see it. And those are just the type of things we're trying to avoid in the beginning. 
But I can talk to you about what we'll find out of some other, if there's some other good technology, topological suite out of there, which we can use anyway. Maybe. Yeah. Scale is an issue, right? The scale at which these mapping is occurring is an issue, not just the projection. I mean, you can have it down the way down, or you can have broad scale. Well, because this is a provincial repository for all the geology. So essentially, we're trying to accommodate the map at different mapping scale. So it's a single map. This is not a final product. It's an integrated repository, all the provincial geology. So we could have area mapped at a quarter million, half a million, versus some area mapped at really detailed, because we have some good mineral potential. Someone could be mapping at the 1 to 10,000, so versus in your adjacent area, it's mapped at a quarter million. So the difference could be huge. That's why we're not only having geometric data boundary problem, but also we have geological boundary problem. We could have a border. We think here you have all kinds of details beyond there. There's no detail. So you know something's not right. So we have to create some boundary, geological boundary, called a data boundary. It's not real. It's just the limit of mapping. So we just map to here. We know the geology. We think this border beyond we don't know. But we have to respect what happened historical, because the area has not been updated. That's just the case. We have to leave that away. So your database, even though you have multiple areas of different scales, your database knows the scale at which it was mapped. Yeah, we do have metadata, keep track of who did the mapping, when, what was the scale. Yeah, we do have some details. Some details, anyway. Well, if no further question, thank you very much for attending. Thanks. Thank you.
|
Polygons are great to have in digital maps, much like a canvas that we can render with beautiful colours. It is common that polygon boundaries are shared by linear features (e.g., municipalities divided by a river or a road). If polygons are used as part of the base to edit, update, and integrate digital maps, we have to reconcile the geometric differences among the shared boundaries and fix topological problems in edge matching. For many years we felt blessed that commercial software tools are available to reconcile shared boundaries, and to detect and fix topological problems. However, if wrestling with polygons leaves you feeling buried in slivers, discontinuities, gaps, and overlaps, you've got Post-Polygon Stress Disorder (PPSD). PostgreSQL/PostGIS presented the British Columbia Geological Survey with an opportunity to identify the causes of PPSD. As a result, we have developed a geologic framework data model and implemented an anchoring mechanism in PostGIS to simplify the process of editing, updating, and integrating digital geological maps. We have dispensed with polygons and eliminated the problems from shared boundaries and edge matching. Healing for PPSD is available in this poster: http://www.empr.gov.bc.ca/Mining/Geoscience/PublicationsCatalogue/GeoFiles/Pages/2014-9.aspx.
|
10.5446/31657 (DOI)
|
I'm Alex Mandel. I just finished, like a week and a half ago, my PhD in geography at UC Davis. Thanks. What you're going to see here is one of the chapters actually from my dissertation. So if I don't cover anything in depth enough for you and you really want to know what I did, you're welcome to go read that. I've been on the OSGeo-Live contributor group since probably one of the earliest versions, like 2009. You'll see later. I think the first time we gave out OSGeo-Live at a FOSS4G conference was in Australia. Were you on that committee? Were you working with LISAsoft then? Yeah, I've had it a long time. Yeah, yeah. It had a strange name back then. All right. So the big purpose of this talk is if you work on an open source project, you're often asking the questions about who uses your project, why they use it, where they use it, and what are the reasons why they can't use it. Why don't we have more people using it when it seems like the obvious solution for so many things? And so what I'm looking at with OSGeo-Live is knowledge diffusion, which is kind of bringing awareness to people. So a conference like this is a method of knowledge diffusion. All of you people have come here now and you're going to hear about some stuff from me, which you may or may not choose to adopt. So that's the second thing on the slide: if you actually use something, then in the world of thinking about this, that's called adoption. But just being aware of something is, you know, the first step. You can't obviously adopt something until you know about it. And once you know about it, then there's this whole thing of, is it appropriate for what I need to do? Do I understand how to use it? There's a whole bunch of other things that you need to go through before you actually decide that indeed I'm going to use this tool. And so I decided to study these parameters in regards to OSGeo-Live. And OSGeo-Live, for those who don't know about it, is a project that was created by the Marketing Outreach Committee of OSGeo years ago. And it's specifically intended for demonstration and education purposes. So it's a live operating system that you can run from a DVD or a USB stick or a virtual machine, which is kind of the du jour thing these days. And it lets you try out almost anything you can think of that is open source and geospatial. Already installed, comes with data already preloaded, comes with a short tutorial on how to get started. So it's trying to get people over that initial hump of, I have no idea how to install GRASS. Any Windows users ever try to install GRASS like five years ago? It wasn't easy, was it? Right? Things have improved now, but this is kind of trying to shortcut that so that you don't have to go through learning to install, just to decide if you want to even try something. You can just start with this and go from there. And if you want to do installations, then you can go seeking out additional knowledge on how to do things like installation. So you can see this is the version that we made and released for this conference. I believe you will be able to get a USB stick loaded with it from the OSGeo booth at some point later in this week. There were some technical difficulties with the USB supplier. And so you might look on here and be like, oh, I've tried a few of those things. And then you might look and see, oh, but there's like 30 more things I've never even heard of. And there's talks on most of them at this conference.
It's kind of a way to explore the world of FOSS4G and try out things that maybe aren't even relevant to what you do on a normal basis. But who knows, in a couple years it might be relevant to the kind of work you do. Okay, so this is a bunch of charts about the history of OSGeo-Live. And it's really hard to read the bottom graph. But basically this is charting all of our releases and showing, since we have different types, the top graph is showing the size difference between virtual machines. And then we have two different kinds of DVDs that you can download. One comes with Windows and Mac installers in addition to the bootable operating system. And you can see over the years that there's some limits that we've had to stay under. And we keep adding more and more stuff even though we have to stay under those limits. And aside from that, it's actually been fairly steady after the initial first years. And the second chart shows the downloads we've had over the years. The early data is a little muddled because we were using multiple mirrors and we added and subtracted mirrors over time and we didn't keep the logs pooled all in one place. So if you're using multiple download places, try to pull all your logs as you go. It's a lesson we learned. We're now using SourceForge for all our downloads. So the last two releases, versions 6 and 6.5, which is what I really analyzed — it was really easy to keep track of the numbers because there was only one place to go to find out all the downloads from all of the SourceForge mirrors. So these days, where the top one there is, I think, about 23,000 downloads — it's kind of modulating a bit. I think our 7 release had something like 30,000 downloads, but our 7.9 release only had about 20,000 downloads. And I haven't looked yet at whether it has to do with how much time there is in between releases. So we try to do about six months, because we're trying to do a whole new version every FOSS4G. And then the bottom chart shows the coders and, in the line that you probably can't see — it's yellow on the screen — the translators, and how that's changed over time. And so the one thing I'll point out is that we didn't get any translators until here. And then when we hit just before 6.5, we actually ended up with more translators than people who were working on the installation scripts for configuring the software. The blue line also happens to include people who wrote the English version of the documentation. So it's not purely coding, but we call it contributors and translators. These are a couple of maps, really washed out, that show the downloads for 6 and 6.5 combined. And there are two different maps here. The top one is just purely by number of downloads per country. So you can see that the United States had the most downloads of any country. But the second map shows you dividing that by the number of people in that country. And so you can see that it's not the same thing. Having the most number of downloads is not the same thing as having, you know, the largest open source community when you're going for relative percentage of the country. The hot spot in this one you'll actually see in the next page where I list them out. So here are the top countries by number of downloads versus the top countries by percent of population downloading. And there's some things that you probably wouldn't expect on the percent of population downloading.
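To make the absolute-versus-per-capita point concrete, here is a small sketch of the normalization step, assuming you have exported per-country download counts (for example from SourceForge stats) and have a population table. The file names and column names are made up for illustration and are not the study's actual data:

```python
import pandas as pd

# Hypothetical inputs: downloads.csv (country, downloads), population.csv (country, population)
downloads = pd.read_csv("downloads.csv")
population = pd.read_csv("population.csv")

# Join the two tables and compute downloads per 100,000 residents
df = downloads.merge(population, on="country", how="inner")
df["per_100k"] = df["downloads"] / df["population"] * 100_000

print(df.sort_values("downloads", ascending=False).head(10))  # top countries by raw count
print(df.sort_values("per_100k", ascending=False).head(10))   # top countries by share of population
```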
But I also heard that Cape Verde — I think someone said there are three people attending this conference from there. So clearly they're into open source there. That is an open source hot spot of some sort. And you wouldn't have known that if you had just looked at regular downloads. And then to get into even more crazy details, what operating system people use varies highly by where they are. And I don't have any explanation as for why that is. I can merely describe it in this case. And so you can see that in the bottom here on the Linux side, there are some countries that you wouldn't necessarily expect to have high Linux usage. And they may not have high Linux usage for their whole population, but the people who are into OSGeo-Live happen to all use Linux in Tanzania. And then, you know, on the other side they're looking at Mac. Apparently people in Singapore really like their Macs. So there's some interesting things here about when you're thinking about who your audience is for a project. And, you know, who are you targeting? This is some useful information you probably didn't have, but you could get out of your download information if you're keeping track of your logs. So I went a little further in the analysis, did some statistical tests. For those wondering, it's marked what I did. And so you can see for OSGeo-Live downloaders in general, Windows is still dominant as expected, but not as dominant as the general internet-going computers of the world. Mac is actually solidly exactly what you would expect for Mac. So the exact same percentage, or almost the exact same percentage, of Mac users that are out there in the world is almost the exact same percentage of the downloaders of OSGeo-Live that happen to use Mac. The next table down shows an interesting shift, which is that for Windows and Linux and the other category, which means they couldn't figure out what operating system it was, the full ISO, which is the one that contains both the live operating system plus the Windows and Mac installers, is the most popular, except on a Mac. On a Mac, the virtual machine is the most popular. And I have some theories about Mac hardware being really good, and so running a virtual machine is kind of no cost. You can actually run almost a full-speed desktop inside of a window, so why not? There are a few other things — it might be that the bootable USB sticks don't really work on Mac; that is one potential reason for it. You guys might have some other ideas, but it's really interesting that clearly they've picked up on it without us having to tell them that that was the case. Then the bottom is showing the variation. So, you know, at the top, you just sort of see the average for everything. At the bottom here, every little dot is a different country, and so where there are large clusters of dots is where you get the boxes drawn. So you can see that the average — the bar lines you've got here are the average percentage over all the countries. But you can see that, you know, overall, it's only a little less than 10% Linux downloads per country, but there's a whole bunch of countries, at least 10 or more countries, where 50% or more of their downloads are from Linux users. So then I started to look at the geographic distribution of who is a contributor and a translator. So a contributor is they wrote some installation scripts or they helped put together some data sets, or they wrote the English documentation. Translators, they translated it from English into something else.
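The "is this different from the general internet population" comparison can be checked with a simple goodness-of-fit test. The slide marks the actual tests used; the sketch below is only one plausible way to do it, and the observed download counts and world market-share figures are placeholders, not the numbers from the study:

```python
from scipy.stats import chisquare

# Placeholder numbers, for illustration only.
observed = [14000, 1800, 2200]     # Windows, Mac, Linux downloads
world_share = [0.88, 0.08, 0.04]   # assumed general desktop market shares
expected = [sum(observed) * s for s in world_share]

chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2={chi2:.1f}, p={p:.3g}")  # a small p suggests the download mix differs from the general population
```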
And so you can see there's some patterns in this. Western Europe seems to have a decent amount of everything. Not surprising. You'll probably meet a lot of people at this conference who are from Western Europe. People in North America don't seem to do much translating. Also not a terrible surprise, because our education system doesn't really stress second languages all that much. And you can see that there actually is a fair amount of distribution. There's a time axis here going from the earliest — it's by version, so the earliest is at the bottom. You can see our growth over time and the balancing, and when Asia started getting interested in doing translations — in, you know, 6, they really picked up in 6 and 6.5, and South America also picked up then. And for those interested in the statistics, you can actually run some statistics on this. And there is a strong correlation between having either contributors or translators — I just combined them — having someone who is in one of those categories does correlate with there being more downloads of OSGeo-Live in that country and region. I actually did an analysis by country, even though I'm showing you regions here. So local matters: if you have a local chapter, that seems to mean you are more likely to download things. So OSGeo having more local chapters is probably an important thing for OSGeo to think about in the future if we want more people to be using OSGeo products. All right, now getting into the meatier topic. There are all sorts of things that could be impeding the usage of software. These are some of the ones that I thought about a little bit for this analysis. You guys can probably think of a lot of other ones. I lump them into three categories roughly: economic, technical, and sociocultural. But the point of this diagram is that there are a bunch that overlap in ways that you wouldn't necessarily expect or in obvious ways. One of the easiest ones is training time. It's not enough to say that training time is a technical thing because obviously you need to learn how to use the software in order to adopt it. But training time actually costs money. It's not a free thing. You know, you either have to go to school for it or you have to have work time to do it or you have to spend your spare time doing it. And you can only spend your spare time doing it if you make enough money that you actually have spare time, right? So in the next part of the analysis that I did, I tried to find some measurements for a few of these things to test which of them were actually barriers or which of them were more important barriers to people downloading. So here's a list of the variables that I pulled down and where I got them from. And so you can see I have a bunch of different things in here that are measuring internet speed, what kind of internet speed you have access to. And I consider internet speed to be a technical barrier that's hardware related, but also an economic barrier because you have to be able to afford high speed internet. Downloading a 4-plus gig file is pretty hefty on an average connection — I think the world average is about 3.5 megabits per second; that's two and a half hours to download the ISOs. So that's quite sizable. It's quite a chunk of time. And that assumes you have a reliable connection. You might have an intermittent connection. You can see some people could take hours, days. And I've got a little bit of more direct economic rankings, income rankings. And then the one at the bottom here is really interesting.
I use this as a social, cultural democracy index is a ranking of how democratic a government is. So this is nation governments. And zero is completely autocratic and 10 is 100% democratic. And there are no 100%. There's some in the high nines, northern European countries. So when you take all of those and you put them, you can put them into an analysis so you get some clarification. The economic and income ones were categorical. They're pretty broad categories. So they're somewhat useful, but obviously not as useful as having pure numerical. All the other data was purely numerical. And I did a regression type analysis. So if you know about linear regressions, this is kind of like it. But there's a problem where all of the variables I was picking are correlated to each other, not just to the number of downloads. And so you have to come up with some ways to work around that. And one of the ways is to use a machine learning algorithm called random forests. I just happen to show you this is the R code I actually ran to do the random forests. It's a decision tree that helps weed out basically what's important and what isn't important. And it uses regression underneath as the principle that it uses to identify that. But by doing lots of repeated tests and by dropping variables here and there, it can kind of really parse out. And the results are way easier to understand than that actually is. And these are the results. And so what the first chart over on your guys' left is showing is that the democracy index was the best indicator in terms of a correlation. I'll clarify, not a causation. So the government isn't necessarily causing the number of downloads, but a type of government highly correlates with the number of downloads. And after that, basically anything to the right of the dotted red line was important enough to consider important. Everything else was negligible. You couldn't tell them apart. So democracy index, then income, and then ITU broadband, which there are a bunch of different broadband measures. ITU broadband was specifically the one that says broadband is something faster than 256K. As opposed to the other measures, what says broadband is faster than four. And then, so that first chart is kind of indicating that sociocultural is actually the biggest barrier to downloading. And then we were kind of curious about what happens if you take that out, what is then important? And once you take that out, then income came out as important. So what we're looking at is most of us tend to think about these things as being technical issues. Oh, they need more training material. They need more training time. It may actually be some other factors like business practices or government funding or, you know, your company allowing you to do training. It may actually be a bigger impediment to adopting any new software, but in this case I'm talking about trying OSGL live. And so those four important ones that I mentioned, the Democracy Index is the first chart there. You can kind of see the blue line on that one is sort of a moving average. And so it shows you that anything below six is kind of all the same. But once you get to six, there's that inflection point. So once you pass a certain amount of the Democracy Index of being a certain amount of Democratic Governance, you start increasing the number of downloads you get. The second chart, which is the box plot there, is the income grouping. 
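The talk shows the actual R randomForest call; a rough Python equivalent of the same idea — fit a forest on the country-level indicators and rank them by importance — could look like the snippet below. The CSV file and column names are hypothetical (income_group is assumed to be the numeric-coded category), and this is not the code used in the study:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical country-level table: one row per country, downloads as the response.
df = pd.read_csv("country_indicators.csv")
features = ["democracy_index", "income_group", "itu_broadband", "avg_speed", "peak_speed"]

model = RandomForestRegressor(n_estimators=1000, random_state=42)
model.fit(df[features], df["downloads"])

# Rank the indicators, analogous to the variable-importance plot in the talk.
importance = sorted(zip(features, model.feature_importances_), key=lambda t: -t[1])
for name, score in importance:
    print(f"{name:20s} {score:.3f}")
```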
And my read of this is that income category one, which was basically high income OECD members — if you're in that category, you tend to have a lot more downloads than anybody else. Once you get past that, you can see the black, middle, fat bars that are sort of the averages. They're all kind of the same. So those really, you can't really tell those apart all that much. So it's kind of like you're either in category one or everything else is pretty much the same, except for maybe category five on the end, which are countries I wouldn't expect to even have computer infrastructure for downloading in a lot of cases. Then there's some other interesting stuff in there, like this chart down here in the bottom corner, which is the downloads by the peak speed. It kind of shows that you're getting increasing downloads up until a point, and then you kind of peter out. So basically, once you get past 25 megabits per second in terms of your internet speed, it's fast enough. It doesn't matter. It doesn't affect who downloads. I'd like to try and find where the exact threshold is of what's the minimum speed that people need in order to download something, and that's going to vary highly for other projects, because OSGeo-Live is huge in terms of size compared to most of the other projects that anybody here would be talking about. But this analysis can be repeated on any other project. That's kind of the point of this talk: I happen to have studied this in OSGeo-Live's context, but I think it's an important analysis that we do on other projects, especially when you want to see who your community is and to try and figure out what you could do, what kind of incentives would help spread your software to more places. This is a summary of the results. Mac users like virtual machines; OSGeo-Live is popular with Linux users. That one's kind of an obvious one. Having participants in your country corresponds with downloads. So that was the thing I was saying, where local chapters and local language groups matter. Culture is a big barrier, and I don't know if we address it enough as developers, because we're usually trying to avoid sociocultural issues, I think, as developers. And despite that, there are still, of course, physical and technical issues once you eliminate some of the cultural blockers. And then some of the important things that came out when I was trying to think about what really matters is that there's been some discussion that the ability to trial everything and try it as much as you want for as long as you want is a huge win for open source. The other thing that I was reading about recently is that the ability to reinvent, which is a core principle of open source, actually matters a lot, even if there aren't a lot of coders, because reinvention also implies that people can adapt software to meet their needs. So they don't necessarily use it how it comes out of the box, which is also a huge part of open source: you can change it and use it however you want, and we don't care how you use it, right? Translation is an interesting one. I don't know if translation actually matters or not. I just know that countries that had translators used it more.
So there's a few things here that I think could be explored more in depth, and really the only way that I can think of to get at these is doing survey questionnaires, which has a little bit of bias in it obviously, because the people who have positive experiences with OSGeo-Live are more likely to answer a survey, and I'll have to cross that bridge when I get to it. I'm not quite sure how I'm going to deal with that. And then I'm interested in trying to figure out what internet speed is good enough, you know, since our community relies on the internet heavily for the version repositories and email communication and IRC and the websites with the tutorials. Having good internet access is obviously a key resource for knowledge diffusion in the open source world. We don't just send print manuals to places and we don't have salespeople who take materials to other countries and sit down with people, especially at educational institutions, and convince them to use the software. And then I think there's room to test a lot more specific data. So, you know, actual household income; there actually is some data out there on English proficiency, since a large part of the computer world is written in English; and it's been suggested that higher education might be a precursor technology or knowledge that you have to have in order to move into the more technical world. And then, of course, since I only analyzed up until about a year and a half ago, there's a lot more data, so we can start looking at change over time and seeing if there are patterns in the geographic distribution versus the time distribution. So I want to thank OSGeo-Live. And if you guys want slides or you want the database and the R code and the Python code and all the stuff that I used for this project, I've got it up on GitHub. I'll put up the chapter for my dissertation with it so you guys can see what I was doing. And hopefully you guys can reuse that in some way or convince me to do it for your project if you want. And I'm open for questions. Thank you. I think there might be a couple of different explanations. One is they could be downloading it to give to a friend. Since they are downloading the full ISO that has Windows and Mac installers, that seems a little at odds with running a Linux machine — why would you need that? But the other thing is, like, for me, in my personal experience, I use virtual machines for development environments. And so having a pre-made development environment is something you can just work with and experiment with pretty easily. And you don't have to install it to your system, especially, you know, OSGeo-Live has desktop stuff and server stuff. You don't necessarily want all the server stuff, which is more than half of it, installed on your desktop or your laptop, running on all sorts of ports all the time. That's kind of overkill and it eats up resources and that kind of stuff. So I think it's about that — you get a contained environment for testing ideas, and it's quick and easy. Once you've got the virtual machine, just make a copy of it, boot it up, play with it, kill it, make a new one. So, you know, it's the same kind of people who are into Vagrant and who are running, you know, huge virtual stacks on the cloud, that sort of stuff. And I think that is a large part of why the Linux users are into it. [Inaudible audience question.]
So, for the download numbers, I purely used what comes out of SourceForge's API. So, all the stuff that you can access by clicking through and seeing how many downloads. They have different views where you can look at it by country or by operating system or things like that. So I wrote a little — I think it's a Python — script for that, one that pulls all that stuff down for a given project for specific folders and then puts it into an SQL database that you can then query stuff out of in R. I did not really find projects delving into it. I may not be finding the right terms. I may not be finding the right kinds of researchers. I think there are projects who have analyzed their own downloads, but I don't think they've gone into the depth I've gone into, looking into, like, the barrier analysis — that is something that I don't think any of the projects have done. The sort of looking at operating system and country and that kind of stuff, I suspect that there are projects that have done that, but it's not published literature. It's probably on a project website, like, by the way, we have this many downloads. I know QGIS tracks downloads in Windows versus Mac and Linux, but it's not all in one place and it's not analyzed in some way to see if it's statistically significant or not. Release candidates? Very low. Usually it's the dev team using the release candidates or the translators. We don't really have general people using the release candidates. They go by really quick when we're just trying to build the final. I think the majority of people download the final. We have download numbers on it in some way and they're small. I think the only way to actually get into that is that follow-up survey that I really wanted to do for this but didn't have time to do, because you really have to get in and ask people, did you use it? What did you adopt or why didn't you adopt? And, you know, find out things about do they work for a government agency or do they work for an educational institution or do they work for themselves? Because all of those things are then going to give you a robust amount of data to really look into why they're making the choices they did. But I also think OSGeo-Live isn't necessarily the best software to do that analysis on, because adopting OSGeo-Live is — you tried it. You know, for more end-user things, like if we did a survey like that for QGIS users, I think we could probably learn a lot. And since there are a lot of QGIS users, we might get a pretty robust response. Thank you very much.
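For anyone who wants to do the same kind of log pull, a minimal sketch follows. The SourceForge stats URL pattern and the shape of the JSON it returns are assumptions from memory — check SourceForge's current documentation before relying on them — so the script just stores the raw JSON per project and date range and leaves the parsing for later:

```python
import json
import sqlite3
import requests

PROJECT = "osgeo-live"               # assumed SourceForge project name
START, END = "2012-01-01", "2013-06-30"

# Assumed endpoint; verify against SourceForge's current stats API.
url = (f"https://sourceforge.net/projects/{PROJECT}/files/stats/json"
       f"?start_date={START}&end_date={END}")
payload = requests.get(url, timeout=60).json()

# Keep the raw response so it can be queried or re-parsed later (e.g. from R).
conn = sqlite3.connect("downloads.sqlite")
conn.execute("""CREATE TABLE IF NOT EXISTS raw_stats
                (project TEXT, start_date TEXT, end_date TEXT, payload TEXT)""")
conn.execute("INSERT INTO raw_stats VALUES (?, ?, ?, ?)",
             (PROJECT, START, END, json.dumps(payload)))
conn.commit()
conn.close()
```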
|
OSGeo-Live is a Linux distribution, available in virtual machine, bootable DVD, or bootable USB formats, containing a curated collection of the latest and best Free and Open Source Geospatial (FOSS4G) applications. This talk investigates the correlations between worldwide download distribution, and community participation against indicators of economic, technical knowledge and socio-cultural barriers to geospatial technology and FOSS adoption. Better understanding the barriers of technology transfer are important to the outreach efforts of the FOSS4G community, and understanding the market development potential of FOSS4G around the world.Results of an analysis of the OSGeo-Live community will be shown but the techniques discussed can be applied to any software project.
|
10.5446/31660 (DOI)
|
All right, hi everybody. Thanks for coming. I feel very honored to be here presenting at the Phosphor G International Conference. I feel also very honored to kind of introduce this concept of geodesign to the brains of the Phosphor G community. So I've been working kind of with geodesign for several years, maybe even before I even heard the word geodesign. Maybe even you have been working with geodesign for that long as well. So what's the difference between simply GIS and analysis and geodesign? So hopefully I can kind of explain that to you. I can propose kind of where we're at. I can propose some solutions that we may need. And I hope to just kind of explain like how geodesign can benefit greatly from open source geospatial for a number of reasons. So the other day I was at the chiropractor actually and the Nature Conservancy magazine just happened to be in there and I'm skimming through, skimming through and I find this article about birds in the Central Valley of California and how the Nature Conservancy is using data. They're gathering e-bird sightings from the Cornell Lab of Ornithology. They're overlaying that with water and like what other kinds of like wetland habitat data that they have. And they're prioritizing where they want to buy farmland, I should say rent farmland from farmers in the Sacramento Valley, so that they can delay when they flood the, when they chop up those fields, when they flood those fields and basically when they get their agriculture back online so that it, they can flood the fields and have, provide habitat for birds. So what they do is they get their data, they make a design, they get a plan together, they go out and they, they give some money to the farmers and they say hold off on planting your crops for another month because we got all these birds coming through. You just have nine million acres of habitat in the Central Valley of California down to less than half a million acres. So now they're getting this up and up, getting the farmers to help out. So this is one of the best examples of geodesign I can think of and I just happened upon this literally last week. So again the data, it falls, we know now that we've got all this technology, geospatial and other, to do our analysis. Where geodesign comes in is where we do what we call designing and evaluation. We make a map. This is a great sketch or at least a diagram that a friend of mine created back at Esri. All right, I used to work at Esri and I don't work there anymore. I actually work for Denver Public Schools and I do some consulting with the city of Boulder which we'll talk about afterwards. So this kind of is a diagram that I'm going to play around with. The whole concept here, I'm going to flip around the colors, I'm going to kind of use to illustrate different concepts of geodesign. We make a map. We've done this before. We have the technology, we know about this. We do spatial analysis, we do overlay analysis, we do raster, we do vector, we do all this kinds of stuff. What does it do? It informs where we want to create something, where we want to design something. The whole thing about geodesign and where it separates itself from traditional GIS is now we're doing design over top of that map that we've created. So all the work that went into that analysis, now we want to draw something over top. We want to maybe draw some areas around where we're going to get wetlands. We want to maybe draw some areas that do this, draw a new land use plan or whatever it might be. 
How do we interact with that sketch? How can that sketch, that design inform us as to how we're doing? So this is really geodesign. So we map, we design and evaluate, and then we take action. So just like that plan that the Nature Conservancy came up with, they took that plan, they gave the money to the farmers. Now birds have habitat, farmers can still grow their crops, and everything's great. We won't talk about the drought. That's another talk. So then what becomes geodesign then? How do we define this? Well, one of my mentors in geodesign is Bill Miller. He's a designer, then he was at Esri, then he was a designer, then he was at Esri, and I think that happened like four or five times, and now he's a designer. Again, he's probably retired living on a boat. Design and geographic space. Where geographic space is the life zone of the planet. So he did a great talk which I referenced at the end called Why Geodesign? And he goes into a great discussion of design itself. So design should create life in the life zone of the planet. And if we take this into consideration, again with the bird thing, or any kind of design that we're trying to accomplish these days, I think geodesign kind of fits into that quite nicely. So the academic brains behind this whole geodesign thing is Carl Steinitz at MIT, or Harvard I should say, I apologize. So what is this long definiton? Geodesign applies systems thinking to the creation of proposals for change and impact simulations in their geographic context usually supported by digital technology. It's the longest definition, but one of the things I also heard him say is that geodesign is software agnostic, and he's done some really good talks. And you can see his most recent talk at the Geodesign Summit this past year where really there's not too much software involved. There's a lot of paper sketching. There's scanning of those sketching kind of overlaying to see what kind of the designs all had in common. Some really brilliant stuff that has nothing to do with software. It's all pen and paper. So again, that's kind of where the designer lives. Where does the GIS person live? Where does the Geo person live? They kind of live in the digital space. How do we bring all this together? So why don't we then define geodesign as a framework for design and geographic space? And let's come back to our diagram here where we've mapped design and evaluate and we act. And let's flip that on its side and let's think of geodesign as scalar. At a certain scale we're accomplishing different tasks. When we look at the state of California, we're looking at the wetlands. We're not looking at the neighborhood. At that scale of the wetlands, we're gathering data at a regional level, wetland data, bird sighting data, whatever it might be. We go down to another scale now. We use that map to direct our action down to the next scale to another level. All right? So again, we're doing something at the next scale. We're designing and evaluating at a smaller scale. And once we kind of have that planned in mind, we're doing some action. So how can we think about what our technology does? All this open source technology that we have available. I think quite a bit of the open source technology that we've been looking at that we've been thinking about developing, understanding, again, offers quite a bit to geodesign. A lot of my conversations bias towards this post-GI system, post-GIS however you say it. 
This PostGIS system answers so many questions, solves so many problems; what I was trying to get the other software to do, I can do now with PostGIS. At its core, we'll call it a database, a spatial database. And today I ask a question: what is the difference between a spatial database and a geodatabase? Think about that. Maybe I can explain that. So at a certain scale, we've got a database that does our analysis, however that might be. It might be PostGIS, it might be — I just saw a great talk on GRASS GIS in here — a lot of great tools in there as well. Whatever that analysis results in, it leads us down to the next scale. What do we do? We're doing data input. We're looking at some kind of a dashboard system to give us feedback about our design. It looks at the line that we drew, the polygon we drew, the point that we drew. It gives us information about that. Is it based on what we drew it on top of? Is it in relation to itself? Is it in relation to numbers that I've attached to it — an area value? We'll talk about that. And then at the very bottom, once I've come up with that plan, how do I get it out to where it needs to go? We've got plenty of web mapping tools now. We've also got, of course, tools to create the old-fashioned paper maps. We have tools to gather input, and we'll take a look at some examples that are being deployed right now. And all this kind of contributes to geodesign, this framework. It's not one product. It's not one button. It's not one application. It's a whole lot of things. And there's not one answer for geodesign. Because this framework is so big, you can really apply properties of this framework to a lot of the different work that a lot of different people are doing. So I'm not sure who in here is doing simply GIS or whatever you might be doing, or who might be an urban planner. If you're friends with an urban planner, you could admit that. Okay. Everybody has something to benefit from geodesign. And like I said, you may be doing it right now. You may have been doing it for a long time. So what do I think is kind of an example of a geodesign stack? Like I said, I love this PostGIS system. Let the database do the work. This is the whole concept of a spatial database. Why would I want to run all these extra tools, create all these extra data sets, manage all this extra information when the database is built for that? And in the talk this morning that Paul Ramsey did, he explained eloquently that a spatial database like PostGIS is just data. Why is spatial data treated so much differently than any other data? You lock yourself into a system where you have to then use only spatial data tools, when you should be learning and using tools that interact with data as a whole. Spatial data is an extra column. But PostGIS has a lot of functions that help you work with that spatial data within the spatial database, which I think is super helpful. So PostgreSQL, PostGIS, pgRouting — we can create things like spatial views, and I'll show you how these kind of work and how they inform the dashboard I'm kind of going to propose here, and other such data.
I think that there's a huge potential here to take open source tools and really build out a robust data-sketching, data-capturing kind of interface. Maybe QGIS is it, maybe it's the web. So we know that whatever that PostGIS data is, we got ways to get to it through the desktop or through the web. If it's all that same data, we don't have to worry about what tools connect to that data anymore, if the database is doing everything itself. And again, I think that's a huge benefit to geodesign. So again, how do we get that plan back? Well, we got web applications, we got mobile apps — we got everything we need. So again, I'm not trying to propose, here's this product that you should take and try to tell people that you can do geodesign with this thing. It's this framework and all these tools that we have, these open source tools — not just GIS tools — open source tools that can fit into all the scales and all the parts of the geodesign framework. So I was playing around with this stuff in my head for so long, trying to figure out what the different parts of this are and bashing my head about how do I hook all these tools together and so on. And then I came across PostGIS. And I said, well, that's great, but what does it really do? And this is why the spatial database is so important. All right. So what I want to kind of demonstrate here is, if I got data in PostGIS, and I'm kind of capturing and doing my sketching in QGIS, that data still lives in PostGIS, which means that I have the complete power of PostGIS to analyze my sketch. And I'm also going to talk about this concept of a dashboard. Dashboards are something that gives you some feedback. You have some control. You can input numbers, run maybe some buttons and so on. And there was a long time back in the last few years where there was much struggle to create a dashboard in Excel. And the struggle was how do you connect back to these processing tools, these geoprocessing tools and all the data and all that kind of stuff. So I said, well, you know, PostgreSQL, which obviously PostGIS runs within — there's a driver for LibreOffice. So can we not just create our analysis living as views that give us numbers, which we consume in LibreOffice? From there, now that we got our data in LibreOffice living dynamically from PostGIS, we got the whole charting library and whatever we want to do in terms of number analysis within LibreOffice. So here's a little crummy land use plan. This is City of Denver parcel data. There's actually a big development going in not too far from where I live. And I've done enough live demos to just say that I'm going to take a break from it for now. So we're just going to look at some pictures. But if you want to find me afterwards, I have this going on my laptop and it's great. So needless to say, here's one place we can sketch and capture data: QGIS. Again, the web — you build your own custom data capture, whatever. All the tools are there if you want. All right. So once I capture some data, all that goes into the database as PostGIS data. Well, normally we'd have to run all these tools to summarize this data, to get the numbers that we need to further analyze our sketch. But we can do that on the fly just by creating a spatial view. So I tried to format this SQL as best I could, but all I'm trying to do here is create a little view that looks at that land use plan and creates two extra columns.
One that takes the area, transforms it into something like state plane and feet, gives me square feet, summarizes it by land use type. Another column, summarized by land use type, give it to me in acres. So I think in here you'll see that there's a little, the first one does a ST area calculation, transforms my geometry into state plane and then calls it area total and square feet. The second red arrow there, I just want to sum all my land use polygons, create the area, transform it into state plane again, divide it by 43 or whatever that number is. That's the conversion factor to acres. Okay. So now I got another column called area total in acres. That's a view that lives in the database. Every time I make a sketch in QGIS, okay, these numbers update. It was so hard to do this before. And Post-GIS again, just makes us, it's incredible. So again, if this is the task and this may not be the task, but if I'm sketching polygons, land uses, I want to summarize the area because I have a lot of information that I want to attach to that area. Information that comes from people like the smart growth people who will say that for X acres of office space, you generate this many jobs. If you have X acres of single family housing, you have X amount of impervious surface. You have pavement. You have grass. You have this. You have that. You have a number of people living. And you can drive all that by bringing that pivot table or sorry, bringing that spatial view into the lever office as a pivot table. You got all your numbers there dynamically updating. And I just created a couple extra columns there. And all those columns are looking at the totals in that middle column here. So I got my land use types. I got my summarized area values in acres. And I'm saying, here's all the jobs per acre because I got my smart growth book here. And it says for office, you get how many jobs per acre? For industrial, you got this many jobs per acre. And this is a basic example. All right. But I can calculate this right away. I make a sketch in QGIS. I go back here, pivot table refreshes. And I got little charts in here. Okay. Sorry. Here's my calculations that I put in. And so basically just charting, making charts from my spatially enabled data. And again, this, if you're new to geodesign, if you're familiar with it before, I think this is huge. And I was sitting there creating this and I was saying, man, I will, you, I spent so much time trying to get tool after tool after tool after tool to do this. And now I can just do it out of the box. And I say out of the box is I'm not a programmer. I don't want to program stuff. I know SQL because I love SQL. I do a lot of SQL at the Denver Public Schools. That's another story, how we can use some of this open source stuff. So again, here's the acreage. Here's just summary charts. I got a lot of open space. I got a lot of single family. Let me re-do my design here. Then I got a lot of jobs because I got a lot of commercial space. I got some office space and I'm going back to QGIS or whatever my client is and I'm sketching. And I'm getting feedback in my dashboard. Again, this is kind of a big deal. So let's take a look at a few examples that are out there right now. So shareabouts is a program here and maybe, I don't know, Critter, do you want to talk about this stuff right now? Or do you want to, I'm going to invite up a special guest. Critter is with Place Matters in Denver and, well, I don't know. Yeah, sure. Yeah. These are, so my name is Critter Thompson. 
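A minimal version of the kind of view being described — total sketched area per land-use type, in square feet and acres — might look like the snippet below. The table and column names (landuse_sketch, land_use_type, geom), the connection string, and the State Plane EPSG code are assumptions, not the actual Denver project schema; 2232 (NAD83 / Colorado Central, US feet) makes ST_Area come back in square feet, and 43,560 square feet make an acre:

```python
import psycopg2

# Hypothetical schema: landuse_sketch(land_use_type text, geom geometry)
VIEW_SQL = """
CREATE OR REPLACE VIEW landuse_summary AS
SELECT land_use_type,
       SUM(ST_Area(ST_Transform(geom, 2232)))           AS area_total_sqft,
       SUM(ST_Area(ST_Transform(geom, 2232))) / 43560.0 AS area_total_acres
FROM landuse_sketch
GROUP BY land_use_type;
"""

with psycopg2.connect("dbname=geodesign") as conn, conn.cursor() as cur:  # placeholder DSN
    cur.execute(VIEW_SQL)
    cur.execute("SELECT * FROM landuse_summary ORDER BY area_total_acres DESC")
    for row in cur.fetchall():
        print(row)  # the same numbers the spreadsheet pivot table refreshes from
```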
I'm from PlaceMatters, also Denver obviously. But I've been doing, similar to Matt in a different capacity, geodesign for many years as a designer and a practitioner, and trying to piece together tools, like he said, in very frustrating ways. And so it's exciting to see how the open source community is helping build things that make my life now as a practitioner that much easier. So one of the things that we do at PlaceMatters — we're a little bit of a think tank and a consultancy around sort of active geodesign and those sorts of things for planning, urban design, sustainable design kind of stuff. And so we've been using and trying to tap into existing stacks, some of the ones similar to what Matt just talked about. Shareabouts is one that's put together by an organization called OpenPlans in New York. And it's an open source, GitHub-hosted project. You can play around with it. It's a way to bring together various tools that support the on-the-ground kind of stuff, where I can easily build sites like this or like this, which are very active. And the next one I'm going to show you is currently built on it. We're transitioning it. The idea is that we can allow users to go out — this is responsive — so with their phones out in the field, capture data, get feedback on projects that we're interested in. So I'm zooming through this just because we don't have a lot of time. WALKscope is one we've developed internally at PlaceMatters. We're using the Shareabouts platform and it's around crowdsourcing pedestrian infrastructure in urban areas. So sidewalk quality, intersection quality, pedestrian counts. We're currently building on that to have a transit scope and a bike scope as well. It's been really popular in Denver so far and we're looking at other regions as well. But as part of this, we were using the Shareabouts platform. We're now in the process of considering potentially having both, but using a stack developed by LocalData, which is a spin-off out of Code for America — they're now in San Francisco — again, an open source project on GitHub. But it allows us some functionality that Shareabouts didn't, in the sense that you can capture the data using your mobile phone, similar, but you can build queries really easily so that folks go out in the field, you can ask questions, they can be conditional, this, that and the other thing. You can capture photographs. But then you can go online and interact with the data that's captured in real time. So it's kind of a nice way both to capture the information and to then allow users to look at that and compare. So another one that we're involved with — a piece that PlaceMatters does a lot with is scenario planning tools, and we help facilitate this group called the Open Planning Tools Group, which meets monthly online, and then we have an annual symposium in November; this year it's in DC in November. So scenarioplanningtools.org. We are tool agnostic in this sense, but we promote a whole bunch of different scenario planning tools and help use them on projects. This is one called Urban Footprint, web based, using an open source stack that you may have heard of. Most of their work is in California, but they've done some other work around the country too. So in the interest of time, I can answer any questions on any of these projects.
We've done a bunch more, but it just highlights a few of the ways that you could take the geodesign concept that Matt was talking about and then use it in a very consumer-friendly way in the end. And this is another thing we work on a lot, which are these touch table ideas. So you can take, like, Urban Footprint and actually sketch — some of the sketching that Matt was showing you — and be able to do it in a group, with a projector on a table, and capture the input with a pen. Thanks. You bet. Cool. Thank you, Critter. All right. The last thing before we go, just another project that I'm excited to work on. We've started doing some accessibility modeling with the City of Boulder. So we're taking kind of their plans developed by another tool set and we're mounting an accessibility model on top. So pgRouting creates service areas. How nice is it that we can just request a service area from the database, build another function to spatially overlay them together, and get back a heat map of accessibility. We take their inputs, we got a little OpenStreetMap data as well, we got their transportation network, build it all in PostGIS, PostgreSQL, pgRouting, build all of our functions. We've actually got a little mapping application where you set some weights for what amenity layers you want to, you know, give higher or lower weights to, combine them together and get your output. So very, very exciting. So finally, a couple of resources. Bill Miller just did a presentation at the Pacific Northwest Geodesign Summit called Why Geodesign. There's a lot of very interesting background in it, challenges, and really just philosophy about it all. And like I said, as a mentor of mine, really good to see that and looking forward to seeing him again. Carl Steinitz has written the book, A Framework for Geodesign, and again, his presentation at the 2014 Geodesign Summit is fantastic. And if you can find that online — it shouldn't be that tough, actually; the Geodesign Summit website has all those videos linked, which I've linked there too as well. And there's now a Geodesign MOOC from Penn State, which maybe we'll sign up and take if I can find some time. That's another conversation. So I want to thank you all for being here and for paying attention and for listening to my grievances and hopefully my excitement. And hopefully next year we'll have a whole group of people presenting at FOSS4G about the work they've been doing with geodesign. And once again, thank you so much. Are there any questions? Most of the examples you were showing are 2D. What about 3D? I don't see a place in the geodesign framework for 3D. I see way too much talk about 3D. I see too much emphasis placed on 3D as the only solution for doing geodesign. And so I don't have an answer. There you go. All right. Maybe we should have the mic here. What is it? Yeah. Okay. So I have shown Tangible GIS a couple times at this conference. So some of you might have seen it. But Anitka will be presenting on Friday at — what? — at one, on Tangible GIS, which actually allows people to do three-dimensional sketching with sand models and solid models, with direct real-time feedback about how these three-dimensional changes will affect landscape processes like water flow or solar radiation or things like that. So I think 3D is the direction that we are seeing in geodesign. And I'm very excited to see geodesign here at FOSS4G. So hopefully next time we will have a session that will be focusing on geodesign. Good to see you. Yeah. Thank you.
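Going back to the Boulder accessibility model mentioned above, the "request a service area from the database" step could look roughly like this with pgRouting's pgr_drivingDistance — reach all network nodes within a cost cutoff from a start node, then wrap them in a hull. The network table names and cost column are hypothetical, and a real model would likely use something better than a convex hull (alpha shapes, for instance):

```python
import psycopg2

# Hypothetical edge table: streets(gid, source, target, cost_length), with
# streets_vertices_pgr created by pgr_createTopology.
SERVICE_AREA_SQL = """
SELECT ST_AsText(ST_ConvexHull(ST_Collect(v.the_geom))) AS service_area
FROM pgr_drivingDistance(
       'SELECT gid AS id, source, target, cost_length AS cost FROM streets',
       %(start_node)s, %(max_cost)s, false) AS dd
JOIN streets_vertices_pgr v ON v.id = dd.node;
"""

with psycopg2.connect("dbname=boulder_access") as conn, conn.cursor() as cur:  # placeholder DSN
    cur.execute(SERVICE_AREA_SQL, {"start_node": 1234, "max_cost": 800.0})  # e.g. an 800 m walk
    print(cur.fetchone()[0])
```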
I guess one of the important things would be if you can do this in a collaborative manner, the sketching part and so on. Do you have any experience with that? So most of the collaborative sketching that I've seen done has been kind of scenario-based, web-based kind of things. I see no reason why it cannot be implemented at any level that we've discussed here today — the desktop, whatever level, the web level, whatever it is. As I said, there are no concrete solutions for any of this in place. And I think that the toolbox that we have available lends itself to creating whatever the task requires. So if someone wants to come up with a framework for doing collaborative sketching, I think that's something that can be set in stone, and then other people can gravitate towards it or they can build their own solution. Not to dismiss it. I just have not built that myself. So, yeah. Okay. In relation to your question about the 3D — I don't want to advertise here because it's a licensed, you know, proprietary program, but you could look into CityEngine. That's very much about 3D, entirely about 3D and spatial analysis. And I have a question for you. Also, we work with geodesign at the University of Copenhagen and we're actually holding a conference on November 11th. So if someone is around Europe at that time, please come. But the projects you mentioned — at the core of geodesign is the collaboration between specialties, between, you know, among different professionals, disciplines, exactly: geographers, designers, urban planners, foresters, you name it. So I was wondering how it was with all the examples that you presented. Was it just one unit in your company or wherever you guys are working, or is it like a collaboration of different specialists coming together working in the framework? So in the case of the Boulder accessibility work — and I can only speak on this project; Critter may provide an answer as well — we work with mostly every type of planner at the City of Boulder, from the financial planner, the land use planner, the transportation planner, and all these different people, to gather feedback about what were these different amenities that we wanted to measure? What were the distances? What was the research that goes into how far we wanted to go? Gathering all the feedback about the weighting, and coming to a consensus from all those folks about how we would undertake this type of analysis. And so really, again, we're just wrapping the tools around that input. And that input is very crucial to what we want because, again, we can just write these tools all day long. But without the input from those people, from those different disciplines — that's really what, again, defines the work that we've done. I don't know, Critter, if you have anything else you wanted to add? I'd say you're absolutely right. And I think for our organization as well — we're a nonprofit, so the whole goal for us is to bring together as many disciplines as possible to be part of the process. And so, for example, the touch table I showed you is something that we developed in-house just for that purpose. You take it to either a public meeting or some sort of a design charrette where you bring the appropriate people to the table, and it actually does a remarkable job — there are various other technologies, but that one, having everybody sitting around one table, they can pass the pen around and they can provide feedback and sketch as a collaborative group. So yeah, I think that's critical. So. Thank you.
Well, all right. Thank you all very much. Once more.
|
Geodesign, at its most basic, is design with geography. It is the combination of the tools and techniques geographers and other geoscientists use to understand our world with the methods and workflows designers use to propose solutions and interventions. For instance, the typical master planning process in which GIS-based knowledge is separated from the design process can be turned into a geodesign task by sketching buildings and other land uses directly within a GIS, and seeing indicators update on the fly as various data graphics. This can then allow the designer(s) to pinpoint specific design interventions based on live feedback from geospatial information.Over the last 10 years, technology has facilitated an explosive growth in geodesign as both a framework for solving problems and a toolkit of geospatial analyses that feed into that framework. The growth of the Geodesign Summit in Redlands, CA from 2010 to 2014 is an example of the demand for this sort of framework.Parallel to the rise of geodesign, the tools represented by FOSS4G have also been evolving into sophisticated tools capable of taking on the needs of geodesign. However, to date there's been too little discussion of how to take the framework and working methods of geodesign and accomplish them with open source tools. This session will connect those dots by taking the typical parts of a geodesign framework (suitability analysis, sketching/designing, evaluating/comparing, iterating) and outlining our own experience making use of open source tools for geodesign. In particular, we will focus on how the interoperability of open source tools and the growth of web-based geospatial tools can support (and evolve!) the ways that geodesign is done.This presentation will address:What is geodesign: the conceptual framework and typical use cases for geodesignWhere are we: workflows and tool stacks we've used and seen others use to dateWhere could we go: identifying current gaps and pain points in existing stacks and possible solutions from emerging technologies
|
10.5446/31664 (DOI)
|
My name is Mark Corver. I'm with Amazon Web Services, part of the public sector team, which is why you'll see up there that I'm a solutions architect. I work largely with state and local government, and because our customers in the education space are so active, especially in higher ed, I spend a lot of time with our higher ed customers too. I'm also the mapping guy on the team. My specialty, and a good chunk of my background, is geo apps on the web; I've been doing that for some years now. It's interesting because this is the one conference I've been wanting to come to for about ten years, so I'm very happy to be here and to be invited to speak. This is the preeminent conference about open source and mapping, and I have a real sense of gratitude for the whole open source community, because it allowed me to run a business. Most of my business was actually in Tokyo, Japan, but we did a lot of projects that had open source components at their core; we were doing things like the first double-byte implementation of MapServer back in 2000 or so. So I'm very happy to be here and to talk about something that's close to my heart. So much for the history; I want to try to go a little bit forward into what I hope is the future. As you can see from my title, this presentation was prepared to help explain best practice around open data. I gave the first version at our symposium in Washington, D.C., I think in June, and since then I've given it to smaller groups a few times. But I know I have a much geekier crowd here today, so I can go more technical, which is always more fun. The core idea, and this is the one thing I shouldn't have to explain too much here, is that it shouldn't be about copying data anymore. We live in a linked world; in mapping especially, we've known for many years what REST endpoints and web services are. Today I'm representing a company that provides you IT on the fly as a result of a REST call, and on top of that you're building systems that are RESTful and do all kinds of interesting things on infrastructure that you can build and destroy within minutes via code of your choice. I work with a lot of different use cases: one day it might be genome analysis, another day Alzheimer's research, or large universities having to share data at scale, so big data for scientific analysis. At the core of many of those use cases, and the larger they get the more this is true, our object store becomes a central feature of the system. So this is best practice around working not in terms of traditional file systems but in terms of object endpoints. I'm specifically talking about our Simple Storage Service, S3, which I'm sure many of you are aware of; I've heard other people at this conference talking about it yesterday. So I want to focus on that, and what I'm going to show you is a test set for which I need to do a call-out to the folks at Mapbox. Let me give you a little bit of background.
Mapbox asked the USDA for the most recent NAIP dataset. It's one-meter-per-pixel imagery, coast to coast, I think it's 48 states, and it was delivered to Mapbox on 24 serial ATA disks, each one two terabytes. Then they contacted me; Mapbox is a customer, they run on AWS, and the idea was: Mark, this is essentially a public dataset, can you help us out here? Can we get this into your public dataset program? It's not quite in our public dataset program yet; it might be, though we would rather it be the USDA's dataset in their own bucket, and I'll speak a little more to that later. So what you're going to see is best practice around building a tiling system focused on delivery of aerial image data. It's not OpenStreetMap data; it's one-meter-class aerial imagery. If you have that kind of data sitting in the AWS cloud, in one of our regions, how can you leverage our services, with the least amount of custom code, leveraging open source projects, to get to market the most quickly? What we'll see is what we call an auto-scaling application, with very little code, that essentially allows you to give any number of people access to 48 terabytes of data that you don't have to pre-cache. I only have four or five slides; I don't intend to slide-deck you today, it's mostly going to be a real-time demo. But I want to make a couple of points clear before we start. In a sense, we're trying to correct for the problem of what the mapping world calls clip and ship. Typically you go to some website, maybe a federal or national data site, you find the data, you clip whatever portion of the world you want out of it, and you somehow download it. If you've done this before, you know the drill: you go to a site, maybe there's a map, you draw a bounding box, and then you might get an email saying the zip file is now available to you. There's this whole manual process with clip and ship, and it's been around for many years. One of the earliest projects I worked on was for Japan Space Imaging, where we built the shopping cart for satellite imagery; same idea. You go through a manual process, and if you're lucky you get an email after a few hours saying it's available via FTP or something. That's still the norm, and you still see it out there. So in the world of mapping, especially when you're talking about things like compressed image data or LiDAR data, there might be data out there, but there's this whole exercise around going and getting it, making another copy, and putting it somewhere on-prem. And if it's a large dataset, you have the same storage problem: where are you going to put it, and is it close to a performant server so you can actually use it after you download it? So there are copies all over the world. When the USDA comes out with the new 2015 NAIP data, what happens? Copies proliferate, and everybody now has a storage problem. But in the interconnected world of the cloud and web-based services, theoretically there should just be one copy. Why should we need more than one copy?
We'd rather have one copy that's a definitive source, well maintained and well curated, with all the metadata, and nobody moving that thing around. When we received the 24 two-terabyte disks from the USDA, of course there were errors, so there's another week spent figuring out where the errors are and correcting them. Even shipping the whole thing is problematic, because there are a lot of files, close to half a million including the metadata for this particular set. So there's a storage cost, a network cost, a computational cost, and then, since you're redistributing every time there's any kind of minor update, there's a huge cost around updating those distributed copies. We bear that every day in the mapping world; we're all used to doing this and we think it's some kind of normative pattern. It shouldn't be anymore. So what makes cloud storage different? One, it's available as an endpoint that you can either make completely public or secure in a very granular way; it's up to you. It's not siloed in some data center behind a firewall. Two, you can provision granular access to it in real time. Probably the best way to think about it: many of you have smartphones in your pocket right now. If you've got an application that lets you take pictures or gather some kind of sensor data and upload it, there's a very good chance you're uploading that via what we call a signed link to an object store, not through a server. You're loading it directly into a storage system that, for example, Amazon Web Services provides to that particular application vendor. The third thing, and this is the cost side of the equation rather than a technical thing: you can offload the network egress cost. This is probably the most important point. When you store data in the cloud, whether it's our object store or another provider's, you're basically paying for how much data you have in there, and then you're typically charged for the network bandwidth that data uses going out the door. That's a variable cost. Right now with us I think it's less than 3 cents a gigabyte a month, so one gig costs on the order of three pennies a month to store. But depending on how often that data goes out the door, which technically means it leaves one of our regions, there's a variable cost associated with that. When I say offload the egress cost, I mean you can have somebody else pay for the network charge. You continue to pay for storage, but you can set it up such that somebody else pays for data going out the door. That's very important, because it allows you to release really large public datasets without getting your network hammered and without your FTP servers suddenly going down, because it's not your problem anymore. You've given us the job of doing the heavy lifting around access to that data. So in the cloud, you still have to pay for storage. It's your data.
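To make that signed-link idea concrete, here is a minimal sketch using the current Python SDK, boto3; the bucket and key names are hypothetical, and this is just the generic pattern, not any particular vendor's implementation:

```python
# Minimal sketch: a backend hands a mobile client a pre-signed PUT URL
# so the client can upload directly to S3 without going through a server.
# Bucket and key names here are hypothetical.
import boto3

s3 = boto3.client("s3")

upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-sensor-uploads",
            "Key": "photos/device-123/img-001.jpg"},
    ExpiresIn=900,  # the link is only valid for 15 minutes
)
print(upload_url)  # the client PUTs the file bytes straight to this URL
```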
You control that data. But there should be just one copy of it. Just by storing it in S3, you get eleven nines of durability. It looks like one endpoint, and I'll be showing this to you, for that GeoTIFF file, but in the background we're of course making multiple copies of it for you. You can't see it; it's not your problem, it's our problem. We need to make sure we satisfy the SLA around that data. And because you can offload the network cost, you don't have to worry about provisioning network on your end just because somebody might come and get the data today. You don't have to worry about your network getting maxed out because somebody decides to download the whole thing in one hour; that's our problem. You don't have to worry about compute costs, because you're not standing up traditional FTP servers or putting it on some website anymore; again, not your problem. And because you're maintaining just one definitive copy of the data, you don't have the cost of updating all those distributed copies, because they're not out there anymore; there's no need for them. I shouldn't have to tell this group, but we need to think in terms of URLs rather than in terms of copies. It's not about copying the data. We now have, I think, over 40 services, so part of my role is to act as a guide to those 40 services. I'll be frank with you: they grow at such a phenomenal rate that even the solutions architects on our team can barely keep up, so if something just came out two weeks ago I often have to refer customers to the actual product team. But I have many years of experience building tiling systems on S3, and S3 is one of the original three services on AWS, alongside EC2, the virtual machine service, and SQS, the queuing service. It was just those three in the beginning, but those three, at least back when I was a customer of Amazon Web Services, let us build pretty much anything, because they are the most core parts: queuing, storage, and compute. Now we have 40. So I'll spend a little time explaining some lesser-known features of S3. The main item here, if you walk out the door with anything, is two words: Requester Pays. Requester Pays is a feature of S3 that allows you to offload the network egress charge, and that's key to today's talk and to our best practice around a government open data strategy that uses the cloud wisely. S3 has many clients; it's been around a long time, so there are command-line clients, Perl clients, basically a client in every language you can imagine. We have a full set of SDKs on our site, from Ruby to PHP to Node, that natively support S3. Today, because I'm running Windows, I'll be showing you a client called CloudBerry, but there are Mac clients too, and in the Java world there's a long-standing project called JetS3t that's used in a lot of projects. So it's very mature. Our larger customers, like Netflix or Shell Oil Company, are all using S3 somewhere at the core of their architecture.
And this is what I really want you to remember: the idea of a Requester Pays bucket. I'll show you how it works. Here's a little picture that maps to how it technically gets set up, but the most salient thing is this: this is a bucket. A bucket is just a top-level name for an S3 container. Every AWS account is allowed to create 100 S3 buckets, and the reason it's limited to 100 is that bucket names are a global namespace. For example, and I need to be careful about naming customers, there are some large university customers out there whose whole www.someuniversity.edu site is sitting in S3. There are emergency sites out there that use WordPress, for example, to generate HTML; the HTML gets pushed to S3, and S3 handles the emergency traffic because it scales massively. So there are very, very simple architectures. What I wanted to show here is that this area is one particular AWS account owner, and that owner has a bucket. You can have 100 buckets, and actually more than that, because you can have n number of accounts. But in that one bucket you can have the data of the world; the bucket has no limits. It's just an object store, a key-value store, and you can keep pushing data into it. You can have as many tiles as you want, as many Oracle backups as you want. The only limiter is that each object is capped at 5 terabytes, but you can keep pumping 5-terabyte objects into that bucket as fast as you want, for as long as you want, and you won't run out of space. So from a map tile cache perspective it's perfect, and it comes up in many talks. So here's the bucket, and here's a virtual machine; in our environment we call it EC2, so it's an EC2 server. Notice that when this virtual machine, living in a region, reads data from one of the account's buckets, that transaction is free. Let me back up a little. I'll show this in the console in a second, but when you have an AWS account you get access to, now, eight regions. We have a global footprint, so if you're a government customer you'd typically run in our Virginia region, the Oregon region, or our GovCloud region, which is actually in the Portland area. But just as easily you can go to Tokyo, Singapore, or the EU and do the same thing; it's just a drop-down list. The point is that this colored area is one of those regions. If you have a virtual machine in a region doing a GET operation against a bucket in that same region, that's free. And putting data into S3 is always free. Now, if you take the data out of the bucket, out of the region, and pull it out to somewhere on the Internet, then there is a charge. That is the data egress component.
When you turn the Requester Pays flag on, in combination with marking whatever objects you want for authenticated access, what it does is make it such that this other account, account B over here rather than account A, pays for the data egress charge. That's the key point. Why does that matter? Because the web has made it possible for everybody in this room to publish. I can write a paper and link it to some other paper, or I can make some dataset available. But with just that, I might not be able to operate at web scale, and even if I could, I might have to pay the network cost. With Requester Pays, you can offload that to whoever wants the data. So today I'm going to show you that there are many views onto the same data. Going back to my point, it's just a bunch of GeoTIFFs that have been gridded and prepped. These are the GeoTIFFs that the prime contractors who flew the planes, who flew the Leica ADS sensors or whatever it was, did a bunch of QA work on, and at the end of the day they get copied to some hard disks. The USDA probably receives those, copies them again, does a bunch more QA, and then, after many weeks, we can get access to it. The way it should be is that there is one definitive copy of the data. It lives in the cloud; it might be in multiple regions, it might even be with multiple cloud vendors, but there should be far fewer copies out there. So let me jump into the demo. What we're looking at here is Leaflet. I'm kind of an OpenLayers guy, but I took this chance to learn a little Leaflet, not that I'm doing anything complex here. The important point: if you back off a little bit, those of you from Oakland will see that this is actually the City of Oakland's dataset, and if you back off further, now we're looking at the USDA's NAIP data. What matters is that I'm using one client to look at both. These are tiles, obviously, that's why we've got the slippy-map thing going on, and I'm pretty sure these particular tiles were built before, so they're already on S3. In a second you'll see that if I move to another part of the USA, the tiles take a moment to come up because they're being generated on the fly. But the important thing is that these images, just the 256 by 256 tiles we're all familiar with, are based on content living in another account's bucket. In this case it's the Amazon Web Services public dataset account, not my account, not my working account. If you look at that account there's a whole bunch of stuff, including microbiome data and genomic data, all kinds of cool public datasets. But up at the top there's aws-naip, and all the mappers here go, oh, okay, I get it: these are states, and there's California. I rearranged this a little bit to simplify it; it's not actually a directory system, these are all object keys. It's laid out to look like a directory system so that next year we can receive the 2015 data, slide it in here, and maintain the one-copy aspect of it.
So this is a little bit rearranged, but it's basically the same data. Here's the one-meter-resolution data. I think Idaho is the only state with half-meter data, so if you look at Idaho it'll say 0.5 meters. The original data was delivered with a set of shapefiles that define the tile boundaries, the index, as you'd expect. The original imagery is all four-band these days, RGB plus infrared, and then there's a bunch of metadata. If you look at the original data, these are FIPS codes, and here's a bunch of files of almost 200 megabytes each. If you have an account with us, and you use a tool like CloudBerry or anything that can make Requester Pays requests, you all have access to this data, and all it is is aws-naip. You have access to 48 terabytes of data. Now, I need to caution you: this is a test dataset, so I can't give you an SLA that it will be there next week. But if you fired up a client right now that could do Requester Pays requests, you could go and see this data and download it as quickly as you want. That's a very important point. At this point, people who have gone through the clip-and-ship exercise are probably realizing: oh, all I need is a client, on my notebook here or in my WorkSpaces VDI container in the cloud, and I can quickly copy all this data into my own account before Mark stops talking, because I want this stuff. If you've ordered NAIP data from the USDA before, you can see this would be a much faster way to get access to it. So you could do all that, but I'm suggesting that would be a mistake. You don't need to do that. You shouldn't have to copy, going back to what I was saying earlier. On the right-hand side is somebody else's account, and preferably this would be the account of whoever owns the data. From my perspective, best case, that's a national agency, or state and local governments that have banded together with other counties for a group buy of aerial data, exposing high-resolution aerial imagery because it's public data anyway. As long as they don't bear a cost in disseminating the information, why not do this? There's no cost associated with making it public if you take this route. On the left-hand side is my working account, and I've got a bunch of stuff in here; I apologize, I have a whole bunch of badly named buckets. But down here there's one called NAIP TMS. You can think of this as a level-one cache. Rather than the cache living on the server that generated the tile, or in memcached or whatever you'd normally use for your caching layer, it's just in S3. I'm using S3 as a cache, and how long the cache lasts is up to you: whether a tile lives for one day or stays in this bucket for a year is all tweakable, and you don't actually have to write code, you just change the lifecycle policy.
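For anyone who wants to try this, here is a minimal sketch of what a Requester Pays read might look like with the current Python SDK, boto3. The bucket name follows the talk, but the prefix and object key are only illustrative:

```python
# Sketch: reading from a Requester Pays bucket with boto3.
# The bucket name follows the talk (aws-naip); the prefix and key are illustrative.
import boto3

s3 = boto3.client("s3")

# List a handful of keys under (what we assume is) California's imagery
resp = s3.list_objects_v2(
    Bucket="aws-naip",
    Prefix="ca/",
    MaxKeys=10,
    RequestPayer="requester",  # you, not the bucket owner, pay the egress
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download one GeoTIFF; again, the key here is just an example
s3.download_file(
    "aws-naip", "ca/example_tile.tif", "/tmp/example_tile.tif",
    ExtraArgs={"RequestPayer": "requester"},
)
```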
And I can show that to you in a second. So this is just a TMS cache, just Mercator data, exactly what you'd expect: layers, you drill down, and eventually you see some JPEGs, and these JPEGs are exactly those tiles. Technically I could just delete all of them and the system would build them again. So going back here, I'll show you a little more about how this works. Right now I'm looking at the NAIP data, and I'm going to turn Firebug on so that, as you all like to do with somebody else's mapping system, we can explore how it works. I'll watch all the requests, and you can see what's going on: as I move the map around, when a tile can't be found it throws a 303 and redirects. Up here I have a DNS name, a domain name that I own, and these requests are going through our content distribution network; they're DNS names pointing at a distribution I've created that, again, leverages the AWS infrastructure, so that's another layer of cache that's closer to us. Over here I've got a couple of test layers; I'm borrowing the MapQuest OSM tiles, and here I have a direct link to the S3 bucket. Right now I'm looking directly at our object store, nothing in between, and for most use cases that's just fine: simple architecture. Over here I'm getting exactly the same data, but via our CloudFront content distribution network, so it goes to the CDN and then to S3. You'll notice, though, that as I move the map around it's going to this thing called the tiler, because neither the CDN nor the object store has the data yet. S3 has an interesting feature where, if it throws a certain kind of error, you can provide a filter and do a redirect. In this case I'm redirecting to the system that makes tiles, the tiler. If you click on that tiler request and open it in a new tab, you can see it just made a tile for us, and all it's really doing is taking this tile name, the typical TMS naming scheme, and under the hood exercising an auto-scaling WMS service running on EC2, a completely separate system that is the definitive source for the tiles in this case. You can see this in practice by chopping part of the URL out; it goes into a check mode and you can see the actual WMS request down here. There you go: if I copy this, all of a sudden you're looking at a WMS server. It's load balanced, using our Elastic Load Balancer, and right now I have it set up with two University of Minnesota MapServer instances. I'm familiar with MapServer and tend to use it all the time, especially for imagery. So I'm running two EC2 instances that know how to deliver WMS content, and this little piece of code here is just asking: what tile does this person want? And it translates that into the appropriate WMS request.
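The translation step he's describing, from a TMS-style tile path to a WMS GetMap request, is generic tile math. A rough Python sketch, with a made-up WMS endpoint and layer name rather than his actual service, might look like this:

```python
# Generic sketch of the tile-to-WMS translation the tiler performs:
# take z/x/y, compute the Web Mercator (EPSG:3857) bounding box,
# and build a 256x256 WMS GetMap request. Endpoint and layer are hypothetical.
import math
from urllib.parse import urlencode

ORIGIN = 20037508.342789244  # half the Web Mercator world width, in metres

def tile_bbox_3857(z, x, y):
    """Bounding box of an XYZ tile in EPSG:3857 (standard slippy-map scheme)."""
    size = 2 * ORIGIN / (2 ** z)      # width of one tile in metres at zoom z
    minx = -ORIGIN + x * size
    maxy = ORIGIN - y * size
    return (minx, maxy - size, minx + size, maxy)

def wms_url(z, x, y, endpoint="http://example-wms.internal/wms", layer="naip"):
    minx, miny, maxx, maxy = tile_bbox_3857(z, x, y)
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:3857",
        "BBOX": f"{minx},{miny},{maxx},{maxy}",
        "WIDTH": 256, "HEIGHT": 256, "FORMAT": "image/jpeg",
    }
    return endpoint + "?" + urlencode(params)

print(wms_url(12, 655, 1583))
```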
Behind the scenes, within the region, so typically this wouldn't be exposed publicly like this, there's an auto-scaling system that's easily tweakable: it's two instances now, but if I need 20 tomorrow, that's a simple change. And it coughs up one of these tiles. It does a couple of things: it services the request, so it makes sure the client is happy, but as soon as it delivers the data it fires off another thread and copies that same tile to the object store, so that any subsequent request is satisfied from S3 rather than from the tiler. It's a very simple architecture. The core feature is that you're not trying to do the caching on the server itself; you're leveraging services available in the cloud, such as S3, to do the caching for you. So, for example, if we go to the management console, I've got a tab open. I'm curious, how many people have seen the management console? Quite a few, okay. For those who haven't, it's a GUI that lets you use any of our 40 services. Right now we're looking at EC2, which controls our virtual machine service; there's another tab for our VPC, the virtual private cloud, which is actually a subset of EC2. It lets you spin up virtual machines any time you want, but more importantly, turn them off and stop paying for them as soon as you do. It's very easy: you hit launch, and I'm not going to do this because I don't want this to turn into a sales event, but you hit launch, make a couple of selections, Windows or Linux (in this case these are all Linux machines), and you can fire up whatever you want. Within the EC2 tab, one thing I want to show you is that I have MapServer running, and down here there's something called auto scaling groups. I have an auto scaling group, just two machines right now, building those tiles. If you come in here you see min two, max two, and all I'd have to do is change that to, say, min four, max four; if I save that, the EC2 system will just go ahead and clone a couple of copies and fire them up. Typically, when you wanted to scale from two to 20 or 200 or whatever you desire, and you're working with 48 terabytes, which from our perspective is actually not that large a test set, you'd have to worry about all the traditional things around making some kind of traditional file system available to MapServer or GeoServer or whatever tiling server you're using. In this architecture I don't have to worry about that, because I'm using yet another open source package, and I'll show you what that looks like by SSHing into one of these machines. So now I'm going back to the part of the console that shows my virtual machines, and I just want to look at the ones that are actually running.
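The serve-then-cache pattern he described at the top of this section, render the tile, hand it back, and write the same bytes into the cache bucket on a separate thread, might look roughly like this in Python. The bucket name and key layout are hypothetical, not his actual code:

```python
# Sketch of the write-through pattern: serve the freshly rendered tile,
# then copy the same bytes into the S3 cache bucket on a background thread
# so the next request never reaches the tiler. Names are hypothetical.
import threading
import boto3

s3 = boto3.client("s3")
CACHE_BUCKET = "example-naip-tms"

def cache_tile_async(z, x, y, jpeg_bytes):
    key = f"layers/naip/{z}/{x}/{y}.jpg"
    t = threading.Thread(
        target=s3.put_object,
        kwargs={
            "Bucket": CACHE_BUCKET,
            "Key": key,
            "Body": jpeg_bytes,
            "ContentType": "image/jpeg",
        },
        daemon=True,
    )
    t.start()          # don't block the response on the cache write
    return jpeg_bytes  # hand the tile straight back to the client
```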
Some of them are probably in start-up mode. Here I have one, and I can get the DNS name down here; this is the hardest part of my demos, copying this. I've got PuTTY running here somewhere, I need to make sure my key is correct, the key depends on what part of the world I'm in, and it looks okay, so I'll go ahead and open it. This is Ubuntu, I'm pretty sure, and I'm in the door. So all I did was set up an SSH session to one of these virtual machines running University of Minnesota MapServer, actually a MapServer and GDAL combination. If I look here, I've got a couple of mount points down here, and you can see the open source tool I'm using right there, yas3fs, which basically makes S3 look like a drive. So, as you'd expect, if I cd into the data/naip mount and do an ls, it takes a second, but there are all the states. Now, remember, this is 48 terabytes. This virtual machine has a couple of 160-gig-or-so SSDs that look local to it, and what this system does is give it access to 48 terabytes of data. It can go get any of those GeoTIFFs we were looking at before, because it's looking at a shapefile index, but it only fetches the ones it needs right now to do the tiling. So it's essentially acting as a cache for the 48 terabytes, caching onto SSDs local to the host this particular virtual machine runs on. As long as the data is in S3, laid out correctly, and good to go, I don't have to maintain the data store that my 20 WMS servers can see; it's all one copy. Now, that might be interesting from a "Mark's got his system up and running" perspective, but what's more interesting is that, because those GeoTIFFs are marked as Requester Pays and every object in that bucket is marked for authenticated access, if you have an account you could do the same thing. You can run your own system and not have to copy the data. You do have to fire up your virtual machines in the region where this data resides, otherwise you'll have latency effects and another cost factor to deal with, but everyone in the room, as soon as you have an account, and I'm talking about one page of code here, almost all of it open source FOSS code, can have a NAIP tiling server that delivers the United States. The other aspect: remember, the S3 bucket has no limits, you can keep pumping data in there. So when next year all the states go from one meter to half a meter, what does that mean? You have four times the amount of data, and much more work around processing it, for example to create internally optimized files: typically you take these uncompressed files delivered to the USDA, internally tile them, and probably JPEG-compress them. That's a bunch of batch processing work you'd normally have to do to get the data back into your working stack. You don't have to do that anymore, and even if you did, you now have HPC resources you can use for a day to do that batch processing. Why? Because you're in the cloud.
I'm talking really generically about the cloud here; those are the design patterns and strategies you can take because we're going against publicly available endpoints now, not an on-prem environment. Okay, so here you see the data, and I'm going to jump back into the console because there are a couple of other things I wanted to point out about S3. Over here we're looking at the source data, which is delivered as four-band. But if you're just doing a base layer for a public site, you don't need four bands, so typically you'll create an RGB-only derivative, and that's what this is. This was not delivered by the USDA; I built it using Elastic Beanstalk and GDAL. So over here, instead of those almost-200-megabyte files, these are files that have been compressed, internally tiled, reduced to three bands, and JPEG-compressed at, I think it was, quality 90, and they're much smaller. The point I'm trying to make is that if one person does this, nobody else should ever have to do it again; it's another aspect of the one-copy idea. So this is also in the bucket. You don't want to go look at the uncompressed originals, because they're going to be slow, and you probably don't need that fourth band unless you're doing some kind of analytics that actually needs the infrared. Typically you want this derivative, and it's just in the same bucket; it doesn't have to be on some different volume because you ran out of space on the original one. I just added it to the bucket, and it's part of the package. This is the kind of thing the content owners could do, because everybody on the planet is probably doing it anyway in order to reduce the heavy lifting around actually using this kind of content over time. So that's what that is. Over here I'm back in the console, sorry I keep jumping back and forth, and I'm looking at this bucket called NAIP TMS, which is essentially the cache. If you look at it, there's a bunch of stuff S3 can do that a lot of people don't know about. The one we're using right now is static website hosting mode: S3 can act as your website. You can just upload content; earlier I was talking about universities using WordPress in a push model to S3, or you can have your personal website on S3, simple to do. But one of the things it can also do, beyond serving index.html, is handle redirection. So in this case, if you get a 403, what do you do? You send it to the tiler. The tiler gets the incoming request for the tile, decodes it, generates a WMS request, creates the tile, serves the tile, and, more importantly, puts it into S3 for the next request. It's very simple. Another thing here, let me scroll up a second: that's static website hosting, an existing feature of S3.
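One plausible way to build that kind of RGB derivative with the GDAL Python bindings is sketched below. He mentions only JPEG compression, internal tiling, three bands, and quality 90, so the remaining options are assumptions, and the filenames are placeholders:

```python
# One way to build the RGB-only, internally tiled, JPEG-compressed derivative
# described above, using the GDAL Python bindings. Filenames are placeholders.
from osgeo import gdal

gdal.UseExceptions()

gdal.Translate(
    "naip_tile_rgb.tif",          # output: much smaller, fast to read
    "naip_tile_rgbir.tif",        # input: uncompressed four-band original
    bandList=[1, 2, 3],           # drop the infrared band, keep RGB
    creationOptions=[
        "COMPRESS=JPEG",
        "JPEG_QUALITY=90",
        "TILED=YES",              # internal tiling for fast random reads
        "PHOTOMETRIC=YCBCR",      # common choice for better JPEG compression of RGB
    ],
)
```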
Down here I have something called Lifecycle. You'll see that for zoom levels 16 to 19 I have a lifecycle policy that just deletes the tiles; this is in test and dev mode, so I delete them after about 24 hours. In production you could keep them live much longer. The point is that there's usually all kinds of heavy lifting around maintaining even this aspect of a cache this large; because this is on S3, an object store, it's just a lifecycle policy. It's the same model that folks in the video world use, for example, to process classroom video: they take classroom video, or video off roadways, which is a little scarier, pump it into S3, probably encode it into a more mobile-friendly format, and then use exactly the same lifecycle feature to move it into Glacier, our archival store, which drops the price again. It's simply a matter of coming down here and adding a rule, basically no coding, just setup. If you want to write code, of course, you can automate all of this in whatever language you like; this console is just a GUI over a bunch of RESTful endpoints that let you do things like change a lifecycle policy. The last thing I wanted to speak to, and I'm almost out of time, I think I have one more slide if I can find it: typically we think about S3 in terms of static content, a web page or an HTML front end that doesn't change that frequently. But if you look at the more interesting, high-scale use cases in the Amazon cloud, you find that yes, it works for relatively static things like a database backup, say an Oracle backup made with RMAN once a night, but a lot of customers increasingly use S3 as a short-term data store. You can do that because all you're doing is changing the lifecycle policy. And I think that's important, especially in government use cases around open data. We have a lot of government customers interested in providing API endpoints to be more open, but it might be a lot easier for them to have a system that pumps data frequently into the object store and lets the end user figure out how they want to use it. It's a different model. Rather than a WMS or WMTS endpoint for some kind of open government data, it might make more sense for the government customer to pump CSV files into the object store, basically because the government doesn't know what the customer's use case will be, and the customer may prefer something that doesn't depend on a government SLA. It's just an S3 bucket, so it becomes our SLA, and there's a big difference there.
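Expressed through the API rather than the console, that expiration rule might look something like this with boto3. The bucket name and prefix are illustrative, and his setup presumably needs a rule per zoom-level prefix (16 through 19):

```python
# The same idea expressed through the API instead of the console:
# expire deep-zoom tiles after a day so the cache cleans itself up.
# Bucket name and prefix are illustrative.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-naip-tms",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-zoom-16-tiles",
                "Filter": {"Prefix": "layers/naip/16/"},  # repeat for 17, 18, 19
                "Status": "Enabled",
                "Expiration": {"Days": 1},  # delete cached tiles after ~24 hours
            }
        ]
    },
)
```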
So rather than focusing on providing open data via APIs that are run and controlled by the government, it might make more sense, for government use cases, whether it's geodata or a PDF file or something else, to just pump that into an object store and let the customer, whether that's an individual citizen or a private-sector entity building, say, a traffic application on top of it, access the raw data, from which they can then build their own API or RESTful endpoint. So I don't want people leaving here thinking S3 is only good for long-duration caches; it's actually good for very short-duration content too. It really is a cache, not just a static data store. That's it for my presentation. Thank you very much for listening. I'll be here until tomorrow afternoon, I think, at the booth back in the corner over there, so if you're interested in hearing more, I'm happy to help. I apologize that I'm the only one here; it was kind of a last-minute thing, I knew I was coming but I'm solo. The one thing I want to ask is that you leave me your business card, because I've been told to come back with data, we're a data-driven company, and I want to make sure we sponsor more of these events. So thank you very much. Appreciate it. Thank you.
|
Since its start in 2006, Amazon Web Services has grown to over 40 different services. S3, our object store and one of our first services, is now home to trillions of objects and regularly peaks at 1.5 million requests/second. S3 is used to store many data types, including map tiles, genome data, video, and database backups. This presentation's primary goal is to illustrate best practice around open data sets on AWS. To do so, it showcases a simple map tiling architecture, built using just a few of those services, CloudFront (CDN), S3 (object store), and Elastic Beanstalk (application management), in combination with FOSS tools: Leaflet, MapServer/GDAL and yas3fs. My demo will use USDA's NAIP dataset (48TB), plus other higher resolution data at the city level, and show how you can deliver images derived from over 219,000 GeoTIFFs to both TMS and OGC WMS clients for the 48 states, without pre-caching tiles, while keeping your server environment appropriately sized via auto-scaling. Because the NAIP data sits in a requester-pays bucket that allows authenticated read access, anyone with an AWS account has immediate access to the source GeoTIFFs and can copy the data in bulk anywhere they desire. However, I will show that the pay-for-use model of the cloud allows for open-data architectures that are not possible with on-prem environments, and that for certain kinds of data, especially BIG data, rather than move the data, it makes more sense to use it in-situ in an environment that can support demanding SLAs.
|
10.5446/31665 (DOI)
|
Hi everybody, my name is Robin Kraft. I work at the Data Lab at the World Resources Institute. I want to give you some caveats before I start talking about big data: I'm not a software engineer, so I'm not especially interested in crazy tuning; the last talk was pretty interesting, but I don't do that. I don't use tools like GeoMesa, which sounds totally amazing, but where you have to think about data layout on your hard drive for performance. I just need stuff to work quickly enough to get my job done, and I think that's still pretty useful. WRI is an environmental think tank based in Washington, D.C. It does a lot of policy work in developing countries as well as in the United States. It's a policy and research shop that generates a lot of geospatial data in the countries where we operate, but it's not a company like Twitter where you have massive amounts of data. That said, on occasion we do have pretty substantial amounts of data, and that's what I want to talk about. Sometimes we end up in this place between big data and small, or normal, data, the stuff you can handle on your laptop with your standard tools, ArcGIS, QGIS, whatever. But there's a point where it becomes... Do you have a presentation or just a talk? Oh, shit. I'm sorry. Excuse me. I've been doing this whole thing over here and didn't even know you couldn't see it. I'm sorry. All right, can you see that now? Okay, here we go. Here's everything I've said so far. All right. So "big enough data" is when it's big enough to be a pain in the ass. I know it when I see it, but there's no line you can draw at more than a gigabyte, a terabyte, or a petabyte. It's when the tools you typically use start breaking down: you run out of RAM on your laptop, your server is crashing, you don't have any disk space, your process runs for weeks, or potentially years if you let it run to completion. Stuff just doesn't quite work anymore. That's the point where you're no longer in the realm of small data, but the big data toolkit might be overkill. You don't necessarily have time to learn HBase and Hive and everything in the Hadoop ecosystem, and Spark and Cassandra and all these really amazing tools that most of us don't really need, unless you do. In most cases, I don't. So there's this awkward middle ground where you need to find tools that will support the operations you need but that aren't necessarily the standard tools on your laptop. I'm going to talk about this in the context of globalforestwatch.org, an initiative that brings together a bunch of partners to put the best scientific data about forestry and deforestation on the web through nice web maps and the like. Before I go on, I want to show you what that looks like. I'm going to talk about two datasets in particular. This one is a Landsat-based global dataset that tracks forest loss and gain over the last 12 years. Let's go look at northern Oregon over here. This is global 30-meter data generated with Google Earth Engine. There's some deforestation in pink here near Mount Hood.
I don't know what's going on there, but if we switch to satellite you can see some things going on, and then there's blue regrowth in various places. So this is a pretty amazing dataset generated by the University of Maryland and Google, and it's global, which is a first for anything like this. The other dataset I want to talk about is Forest Monitoring for Action, or FORMA. That's the one I've been working on. It's a MODIS-based system for tracking deforestation hotspots, or rather forest loss hotspots, since we don't like to say deforestation because that's politically charged; however you want to define deforestation is up to you. But where there are trees and then there are no more, that's what we want to identify. I'm just going to zoom in here to Indonesia, one of the major hotspots. What's interesting about FORMA is that it's updated every 16 days, so you can see the viral spread of deforestation across the landscape, as you can hopefully see here. The idea is that we want people on the ground to be able to react to forest loss as quickly as possible. For forestry, this is considered near real time; in the past, for a country like Indonesia, you'd get a new map of deforestation every couple of years. So with the 30-meter annual Landsat-based data from the University of Maryland and our 500-meter, 16-day-resolution dataset, you have some pretty cool new tools built into Global Forest Watch for anyone working on international forestry. So back to the presentation. Why is this not full screen? All right, here we go. Okay, done with the demo. Now I'm going to talk a little bit about the nuts and bolts of how FORMA works. But first, I don't know how many of you saw the interesting talk yesterday by the guy who does Leaflet; he talked about how simplicity should be one of the guiding goals, and he made a lot of good points. One in particular, which I actually heard at another conference, is that simplicity in some cases is better than optimal, because with an hour of one of our salaries you can buy something like 400 hours on Amazon to crunch whatever you're crunching. So there's a real question about how much time you want to put into optimizing the hell out of a process when, if your process can just scale, you can save a lot of money and time instead. I know this isn't going to work in every case, but it's something to keep in mind. So FORMA is basically an image processing algorithm. It takes in a lot of satellite imagery from NASA, the MODIS vegetation index dataset, and then we do statistics. What you see here is one pixel, with the vegetation intensity shown over time. This is the NDVI; it's basically a measure of vegetation intensity, or greenness, and it has seasonal fluctuations, even in the tropics where there aren't seasons as we would recognize them here. But the important thing is that the NDVI, even with the seasonal fluctuations and cloud cover and whatnot, has pretty predictable behavior.
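For reference, NDVI is the standard normalized band ratio of near-infrared and red reflectance; a one-line Python version makes the definition concrete (this is the textbook formula, not anything specific to FORMA's code):

```python
# NDVI: the normalized difference of near-infrared and red reflectance.
# Values near +1 mean dense green vegetation; values near 0 mean bare ground.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(0.45, 0.05))  # a healthy forest pixel -> 0.8
```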
So if you have something that goes from green, intense vegetation to brown, not very intense vegetation, and you happen to have fires around the same time, which you see at the end of that time series, that is something that might be considered deforestation. How exactly we classify it depends on a model built around historical deforestation; we're looking for patterns in the NDVI signal that are indicative of deforestation. The point, for the purposes of this talk, is that we need to build pixel time series so we can run regressions on them. We need to do spatial joins so we can bring in other datasets like rainfall or fires, which are not in the same format as the raw NDVI we're using. We need spatial filters, because we don't care about deforestation over the ocean, so we need to filter that out. And we need to be able to do statistics, just the standard statistics you'd want to do in econometrics, which aren't necessarily designed for working with images. When we first started doing this, we were using one or two desktop machines, both hitting the same hard drive; we didn't really know what we were doing at that point. We were using ArcGIS and Python, with NumPy to do the actual math. It worked, but only for a very small number of pixels, just to show that the algorithm had legs and that it worked in Brazil and Indonesia on these little postage stamps. We then struggled with how to scale that up from 10,000 pixels, 100 square kilometers at one-kilometer resolution, to 100 billion pixels at 500-meter resolution covering the whole tropics. The insight we had was that if we treat everything as a raster, that helps us in certain ways and causes problems in others. But unlike this guy, I'm actually a fan of rasters; I think they're an amazing data type. And if you treat everything as a raster, you can treat everything as text, because at the end of the day you can convert a raster, like this very simple little raster here, into rows, columns and values, and then you have something you can throw into a database or just write as text files. And since anything we care about, points, polygons, lines, can be converted into rasters, you can convert all of it into text. That's great because Hadoop loves text, and that's where we get to the bigger data questions. The problem is that Hadoop is not simple. I don't know how many of you are familiar with writing MapReduce jobs, but it's not a very intuitive way of thinking about how to process data, especially geospatial data. So what we ended up doing is using a technology stack built on Clojure, Cascading and Cascalog to take away a lot of the pain of working with Hadoop. We also run this on Elastic MapReduce on Amazon, which is convenient. Clojure is a very nice language to work with; it's a Lisp, it's very elegant, and if you're into Lisp you'll appreciate Clojure. If you don't know Clojure or Lisp, you'll see a little bit of it today, and it's weird, but it's good. Cascading is a really cool library that basically writes MapReduce jobs for you.
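The raster-to-text idea is easy to make concrete. Here is a toy Python sketch, not FORMA code, that flattens a small array into the kind of row/column/value lines Hadoop is happy to split and shuffle:

```python
# The "everything is a raster, and a raster is just text" idea in a few lines.
# A toy 2x3 array stands in for a satellite image band; each pixel becomes one
# "row<TAB>col<TAB>value" line that Hadoop can split and shuffle freely.
import numpy as np

band = np.array([[0.61, 0.58, 0.12],
                 [0.64, 0.09, 0.11]])

lines = [
    f"{row}\t{col}\t{band[row, col]}"
    for row in range(band.shape[0])
    for col in range(band.shape[1])
]
print("\n".join(lines))
# 0    0    0.61
# 0    1    0.58
# ...
```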
So you basically tell it what to do, and it will write the MapReduce jobs for you and run them on your Hadoop stack. Cascalog is just a Clojure wrapper for that library. So you get the benefits of MapReduce, a lot of scalability if you can express your problem in those terms, but you don't have to think about MapReduce. Here I just want to do a little bit of a code walkthrough; it looks like that's kind of tiny, that might be a little better. All this is doing is taking in a data source which has rows, columns and values, multiplying the value by five on that last line there, and then spitting out the results: the row, column and the new value. This can run on your laptop, and it can also run on a massive cluster, and you see at the bottom what you get. One of the interesting things about Cascalog is that you can do these implicit joins. In this case we have a pixel source, just row, column and value, and we have a dataset that represents the countries those pixels fall into, which we've generated previously somehow. Basically, by naming the fields the same in each source, so the row and column are named the same, you can see, do you see my pointer? So like right here: pixel source, row column, row column there. We can do an implicit join by specifying row and column in the output vector, and there we've done what amounts to a spatial join, in three lines of code, on your laptop or on a hundred servers. This next one is getting more complicated, but not really, because all we're trying to do is join fires, which happen at a certain latitude and longitude, with a country, and count up how many fires happen in each country. So we've got a fire source with a latitude and longitude, the date and the brightness. We've got the country source, which is in rows and columns again, sorry guys. We have a function that converts from latitude and longitude to rows and columns. We filter on the brightness because we only want hot fires, and then we count up how many fires happen in each country, because the country is what's in the output vector. So it does an implicit join again to give us the result at the bottom, where Indonesia is a hotspot of fires. Here we're trying to build a pixel time series, which is essential for doing our regression analysis. You can imagine this having originally been two pixels in four different rasters over time. All we have to do, to have this scale from one laptop to a hundred servers, is have a function called build-series that takes in the date and the value and spits out a time series, which is just a vector of values. This is what we get at the end, a nice clean vector of values. If we're using this subsequently, we can pass a regression line through those values and see whether they're changing over time in a statistically significant way; just standard stuff. So that's how we do a lot of the data manipulation in the actual FORMA algorithm.
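To make the logic of that fires-per-country query concrete without Cascalog, here is the same idea in plain Python. The lat/long-to-pixel conversion, the brightness threshold, and the sample records are all illustrative stand-ins, not FORMA's real grid or data:

```python
# Plain-Python illustration of what the fires-per-country query does:
# convert each fire's lat/lon to a pixel row/col, keep only hot fires,
# join against the country-per-pixel dataset, and count per country.
# latlon_to_rowcol is a stand-in for the real MODIS grid conversion.
from collections import Counter

def latlon_to_rowcol(lat, lon, cell=0.5):
    return int((90 - lat) // cell), int((lon + 180) // cell)

fires = [  # (lat, lon, date, brightness); brightness threshold is illustrative
    (-2.1, 112.4, "2014-06-01", 345.0),
    (-2.2, 112.5, "2014-06-01", 290.0),   # too cool, filtered out
    (-6.8, 106.9, "2014-06-02", 360.0),
]
country_by_pixel = {  # (row, col) -> country, i.e. a rasterised country layer
    latlon_to_rowcol(-2.1, 112.4): "IDN",
    latlon_to_rowcol(-2.2, 112.5): "IDN",
    latlon_to_rowcol(-6.8, 106.9): "IDN",
}

counts = Counter(
    country_by_pixel[latlon_to_rowcol(lat, lon)]
    for lat, lon, _, brightness in fires
    if brightness > 330 and latlon_to_rowcol(lat, lon) in country_by_pixel
)
print(counts)  # Counter({'IDN': 2})
```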
90% of the code is actually just moving data around, doing the joins, bringing in different data sources, making sure everything lines up correctly. But then we have to do just standard statistics, and there are software libraries that take care of that for us. But then, once we have our data set of all the deforestation that we've detected, we need to put that onto the map that I showed you. So the guys at Vizzuality developed the site for us and they came up with this crazy data type, which is sort of like vector tiles, except it's just text, not the binary format that Mapbox is working on. But basically, you have these X and Y fields that tell your browser where to paint a pixel on the screen. And so you get the really smooth animation that you saw in the demo. It's not swapping out tiles at all, because that's really inefficient — it's just redrawing pixels as time moves forward. So this SQL here is what we were using to generate those different zoom levels. Andrew Hill at Vizzuality — at CartoDB — wrote this code. I didn't have to think about this, which is really nice. But the problem was that it got really slow and sometimes we were getting server timeouts; for a large table it just was not efficient. Unfortunately we launched using these, and I was up at four in the morning on launch day trying to make sure that these SQL queries finished, because we just kept updating the data. We had to launch and the SQL queries kept failing because we were also getting launch traffic, and anyway, it was turning into a nightmare. So what we do now instead, because it was hard to test and it kept breaking, is we use Cascalog to generate the values that go into the table. So we have this very simple calculation that takes an XYZ coordinate, calculates the values at different zoom levels and updates the XYZ values. This is the Cascalog query that actually does this — or no, this actually generates the XYZ coordinates from lat-long, or from row and column going through lat-long into XYZ. And as a result, you are transposing the time series into long format, which we can then use to count up how many deforestation events happened in a particular area. And that's how we then paint the change in forest cover on the website. And then the query that actually generates all those zoom levels is just this three-line thing. This is just telling us where the data comes from, generate the tiles using the function above, count up how many deforestation events happen in each of those tiles. And what's nice is that, again, that scales from your laptop to 100 servers pretty simply. So the nice thing about that is instead of having to babysit these SQL queries hoping that they finish, hoping that the server is not under too much load to handle the update, instead we get something that's basically infinitely scalable. We can test every bit of the code before we deploy it in production. It's a very reliable process and it's fast enough. So instead of having something super optimized that would take a few minutes, this might take an hour, or if I throw more machines at it, it'll take 15 minutes.
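(For readers who haven't seen it, the lat/long-to-XYZ step described above is the standard slippy-map tile formula. Here is a small, self-contained sketch — again an illustration, not the project's Cascalog code — that converts event coordinates to tile addresses and counts events per tile at a few zoom levels.)

```python
import math
from collections import Counter

def latlon_to_tile(lat, lon, zoom):
    """Standard Web Mercator (slippy-map) tile indices for a lat/lon."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Made-up deforestation events as (lat, lon) pairs.
events = [(-2.1, 112.3), (-2.2, 112.4), (-6.9, -52.5)]

# Count events per tile at each zoom level we want to serve.
counts = Counter()
for zoom in (4, 6, 8):
    for lat, lon in events:
        x, y = latlon_to_tile(lat, lon, zoom)
        counts[(zoom, x, y)] += 1

for key in sorted(counts):
    print(key, counts[key])
```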
But the idea is that we don't have to always optimize everything down to the last millisecond. And if you can horizontally scale your process, you can just throw more machines at it until you get it quick enough for your purposes. So just to wrap up here, the lessons for big-enough geospatial data: the first one, echoing Mike Bostock's great talk yesterday, is find the right tools and actually use them. Don't get stuck using the same old tools that you've used in the past that aren't quite the right thing for what you're trying to do now. There are a lot of great tools out there for doing distributed processing. This is just a sampling of them. I've used Hadoop, StarCluster, Spark — actually, I guess I've used all of these except for GeoTrellis. But depending on your use case and your application, each of these can have a role in processing data sets that you wouldn't otherwise really be able to handle in your normal tool set. It's useful to keep in mind that simple is, or can be, better than optimal. You are very, very expensive. If you can get computers to do your work, you're saving yourself money and time. If you're creative about data formats, if you're not worried so much about using strict geospatial data types and indices and stuff — if you're creative about what geospatial means versus just pure text — you can explore some tools that otherwise would be unavailable to you. The last thing is that Hadoop can be really great, it's very powerful, but it can also be really painful. So keeping things simple with a library like Cascading that will do the work of MapReduce for you is a really nice thing. So that's that. My name's Robin. Any questions? I got a question. What's up? Oh, sorry. So Clojure is really cool. How are you distributing and managing — so your data sets are really big, so now you need to scale out to 100 nodes on AWS. What are you actually using to manage that job and then distribute the data set? So we actually work completely on AWS, so we don't ever have the data locally. We just keep it on S3, and natively it's available to our Hadoop system, so it's just all there, and it's available to every node. So philosophically, I'm trying to wrap my head around the raster-to-text expression, in terms of thinking about your spatial data as text. In terms of how do you assign a persistent ID to a raster, to figure out what part of the world that little piece of text talks about — could you just give me 30 seconds more of that? Sure, that's a great question. So MODIS data are split up into tiles, which are 10 degrees across, so a given latitude and longitude can be converted into a tile coordinate. You can figure out which tile it falls into and then which pixel within that tile it falls into. So we have this mapper from lat-long to tile and image coordinates, and so we can go back and forth. And that's persistent for the MODIS — yes. This library? Yeah, MODIS is great for just how incredibly consistent and systematic it is. Got it. So that's super helpful for us. So how was the data broken down when you're processing it? Did you process it all, or — I don't know too much about the Hadoop cluster, but does it process it a chunk at a time? Yeah, so at the beginning we have to start out by processing each file, each image file, individually, because you can't split those natively in Hadoop.
So we just read in one file, basically split it up into chunks and spit it back out. And then those just go into text files — well, sequence files, which is Hadoop's binary format for storing data. And it just spits out rows and rows and rows of values and Hadoop handles all the splitting for us. So we never really think about that again. Got it. Yeah. We started playing with this quite a bit too, and I still have to wrap my brain around, you know, that geographic-to-text conversion. Will that always be the way it is? Will people write geographic wrappers around, such that Hadoop — not Hadoop specifically, but something on top of Hadoop — can take geographic data more natively? Yeah, that's a great question. There was a pretty cool talk yesterday about a project called GeoMesa that does just that. And it sounds like a very high-performance way to do geospatial natively in the Hadoop context. The more you get into trying to do, like, real geospatial, the more complex it gets. And you have to ask whether the return on the investment is worth it. And for us, it wasn't. But then again, we're also not computer scientists. So, you know, the learning curve — it was bad enough just having to get up this learning curve; going that much further was just too much. But there are people that are working on that. I'm hoping that there will be a geospatial tool set that just sort of works out of the box, like PostGIS is today. Well, out of the box-ish, like PostGIS is today. But I haven't come across that yet. But I'd love to know if somebody else has. All right. Thank you.
|
Big data gets a lot of press these days, but even if you're not geocoding the Twitter firehose, "big enough" data can be a pain - whether you're crashing your database server or simply running out of RAM. Distributed geoprocessing can be even more painful, but for the right job it's a revelation!This session will explore strategies you can use to unlock the power of distributed geoprocessing for the "big enough" datasets that make your life difficult. Granted, geospatial data doesn't always fit cleanly into Hadoop's MapReduce framework. But with a bit of creativity - think in-memory joins, hyper-optimized data schemas, and offloading work to API services or PostGIS - you too can get Hadoop MapReduce working on your geospatial data!Real-world examples will be taken from work on GlobalForestWatch.org, a new platform for exploring and analyzing global data on deforestation. I'll be demoing key concepts using Cascalog, a Clojure wrapper for the Cascading Java library that makes Hadoop and Map/Reduce a lot more palatable. If you prefer Python or Scala, there are wrappers for you too.Hadoop is no silver bullet, but for the right geoprocessing job it's a powerful tool.
|
10.5446/31666 (DOI)
|
Ie, Mike is on now, that will help. All right, sorry, so I'll go with that again. We have offices across the US, two in Brazil and one in China, dealing mostly with contaminated land cleanup, but we have a marine science group in the northwest, we have mining groups in Colorado and Montana, and chemical forensics groups in Boston and New Jersey. Excuse me. [unintelligible] I was in the UK for ten years, and having grown up in South Africa I soon came to realize that the resources and funding were not available to a large portion of the population for various technically-orientated projects. So that naturally led me towards the open source arena, so that's where my passion really lies, and I've taken that with me into the consulting business, which is where I am now, based in Atlanta. So, when I arrived in Atlanta, the first thing I saw from my experiences was there was a need for change. Many of the users were using Access databases, using the personal geodatabase. We are partially an Esri shop. We have ArcMap as the desktop product. I've actually got no problem with ArcMap — I think it's a fantastic product. The issue really, when I arrived, is that they were using these personal geodatabases. Project collaboration often involved emailing databases around, with issues with data currency. At times we might have been presenting outdated data to clients, and just the standard issues that you might expect from single-user databases. In addition to that, databases were exceeding the MS Access limits with the amount of data that we had, and finally any web-based mapping applications — those efforts were disconnected and difficult to maintain because there wasn't any binding onto the base data. So what was the solution we were going to apply? I looked at this and said to myself, what are we doing really well and how can we do it better? What we are doing well: the engineering part of it seemed to be really spot on. Our ability to analyse data was excellent. We have a patented EDMS system which allows us to fine-tune the data and QA/QC the data in a really great way. That means that the data that we are getting, once it has been through this system, is in an excellent condition on which to base informed decision making and do a lot of analysis with. How can we improve on that? First of all, to get rid of this single-user Access database situation, it was the introduction of Postgres as a centralised multi-user data solution. Then it was a case of building tool sets that make the transition to PostgreSQL a lot easier. I introduced QGIS as an open-source solution, an alternative to the ArcMap product — for those of you who have not explored it yet, it is pretty impressive. I was blown away. A couple of years back I checked it out and I think it needed a bit of work, but it really has come a long way in a relatively short span of time. It is superb at connecting to a PostgreSQL database. I introduced mobile solutions. I also do an annual staff training on the fundamentals of SQL, getting users more comfortable with the PostGIS and PostgreSQL environments. And then looking at basically a company-wide platform for sustainable, organic application development. The idea is that, given our various pods across the country, they all have different roles. Some of them are very specialised, some of them are less so.
The idea is that we have a common infrastructure with regards to technology across the company. I do not really need to know the specifics of what they are doing, but they now have a common infrastructure on which they can develop resources and tools to support the specifics of what they are doing on a daily basis. For example, they could hire a programmer and, regardless of that programmer's background, they would then have a platform that would work with whatever the programmer is familiar with. Challenges. Working with engineers — I guess I should put that as the first one. The need for solutions without taking on an extra workload. Bring a solution, but we do not want to do anything, we do not want to get involved in any way, and we do not want to be bogged down with anything extra. That dovetails nicely into the second point, which is the aggressive enterprise solution marketing campaigns. It is one of the bigger challenges. I think at this point I considered getting everyone to sit in a circle and entering a slide entitled, my name is John and I have used ArcGIS Server. Jokes aside, it is very difficult dealing with the aggressive marketing campaigns, especially given that the engineers have very little time to take these things on. I think one thing the open source community, especially FOSS4G, could really do with is a marketing force like Esri has — that would really do wonders for the open source community, I think. Anyway, so it is really getting these promises that the engineers are reading about on a weekly basis, coming in by email, by letters, you name it, and trying to convince them that it is not exactly as it looks. But one of the other main ones is a culture of this is how things have been done for 20 years — coming up against people that are just accustomed to doing things a certain way; they are kind of set in their ways and it is very difficult for them to embrace change. So how do we go about breaking those down? Really it is not only to show people what FOSS4G is capable of, but that they themselves can effectively embrace it. So I am finding a lot of the time that the younger users are picking it up a lot better, embracing QGIS and PostgreSQL, and oftentimes folks that have had no experience with Access databases at all are definitely doing a lot better with understanding the SQL concepts and the PostgreSQL databases. We are kind of fairly top-heavy in terms of the senior level of our company, but there is a younger generation and I think things are definitely changing, and there is a lot of energy going on with regards to the applications we are developing. And just to echo some of the items that Vladimir and Macbasker touched on: tools are built with a specific purpose in mind, and applying an enterprise solution to something very specific often is not the best solution — and it certainly is not in our case. So I identified three types of PostgreSQL users within our company. The first group is the developers. This is the smallest group. I head up a team of developers. We do application development and distribute these applications — the idea is to distribute them across the company to facilitate the movement of data, visualisation and access to information. So the smallest group: programmers and developers building applications. The second group is your analysts.
They might get involved in the pgAdmin interface, perhaps the command line, getting their hands a bit dirty with the SQL, creating queries or additional tables that are available for the remainder of their colleagues to digest using ArcMap or QGIS or other applications that we might build for them. And then the final category identified is basically everyone else. And these are the folks that are actually using PostgreSQL without even knowing it. So you might have that little add-layer button within the desktop GIS application. And they don't know that it's coming from the PostgreSQL back end, nor do they care. So it doesn't really matter, but the fact is that they are using data that is current. It is shared by everyone. And it's a good thing to do — it's just best practice really. So the GeoStack that we're using is PostgreSQL and PostGIS. We've got MapServer going on. I'm interested to see what GeoServer could do for us. We're currently using OpenLayers 2. We're certainly not limited in any way to these, and I'll go into that a little bit more later. And that's one of the beauties of embracing the open source, the FOSS4G setup: you can really mix and match and kind of use whatever you want. You're not really set or stuck on any solutions. So this could kind of change on a daily basis really. And for us, we're in a position to do that because each of our projects is so different that we can embrace different things. So, a relatively small amount of code can render a surprisingly informative visualisation. This particular code is in OpenLayers 2. It's literally a heat map layer. I haven't seen many heat maps here — I thought they would be quite popular. It seems to be quite trendy at the moment for everyone to be doing heat maps. I saw Google's library has just been released for the mobile applications, for the heat maps that are going on there. They can be absolutely useless, but if used correctly, they can provide a very informative visualisation. So the code here, if anyone is interested — I think the heat layer is actually declared above this — but we've got a URL pointing at the GeoJSON, declaring that it is GeoJSON format. And then we're simply getting the style and the results back from that and then zooming to the extent of this particular layer. So if we're looking at the results of this information, we can see that this is a heat map. That's good — I don't know if you can see that. Okay. So we've got a heat map of the Maria de Grasse project that we were involved in in Rio de Janeiro, Brazil. And it is of water contamination — I think the contaminant of concern is vinyl chloride, I'm not too sure. But you can see we've got the standard kind of web interface: the logo, the home button, scale bar, lat-long, a bunch of buttons in divs that can be retracted. And I've actually bought a logo from one of my favourite artists, Paul Davies — I'll give a shout-out to him just now. But the point really is you can have a lot of fun, but at the same time deliver the client exactly what they need. We've also implemented various tools so that people can change the cell size and the contaminants of concern with regards to the heat map, so that it changes according to your contaminants or according to your cell size or the different settings that you can apply. Here's another typical web mapping application we've got. I'd just like to show you how we've incorporated D3 into the application.
I'm very excited about the D3 library because it's really opened up a lot of things with regards to charting and cross-section tools. At university I did a lot of SVG. It seemed to kind of fizzle out because of the browser support, and then I guess Mike and his library really picked it up and breathed life back into it, so I was very excited to see the library was available to us. I'll just run this quickly. I've got tool sets — and this is another really beautiful part about the open source community: there are so many things available. These containers here and the side widget that expands are available from a gentleman called Matteo Bicocchi from the Pupunzi open lab group. He's got a bunch of great widgets that you can apply and use as much as you like. Let me just start this for you. Windows can be closed and moved about and so on and so forth. They don't have to be — you can set them still. You don't have to use them at all. You can use another product if you like. It's just, again, easy to mix and match. We've got a sidebar, the layer legend. We're turning on a groundwater layer here. We're going to use a custom tool that we built. It's a D3 tool. We're going to select a bunch of locations and get time series plotted out. This particular time series is plotted out at various depths for the trichloroethene contaminant of concern. But there is a dropdown for you to change it, like this, in which case it makes a call to the database, which is the PostgreSQL back-end database, and regenerates the D3 being displayed. As you mouse over it, you'll see the results that are coming back. It's just simply scalable vector graphics. Here's a cross-section tool. The user can dynamically generate the cross-section using the map. Again, D3 churns out the values. All the data is held in the PostgreSQL back-end database. Then the user is able to choose a contaminant of concern that gets displayed on the borings and the screens. Then you're able to identify the lithology layers. You can go ahead and click on any of the points to get time series data. You get an idea of how that contaminant has behaved at that certain location over time. It can be changed and resized. We've got a Google base map running at the moment as the base layer. Again, that could be changed up. So there was that particular one. Quick shout-out to Paul Davies — he goes by Matt O'Hann as well. Fantastic artwork. I love his stuff. He's actually got a library for the Mac; I've got his icons on my Linux machine. This is another D3 application. It's a static cross-section this time. You're able to draw the lithology in a little bit better using other software. We can draw the lithology on there. Then, again, you've got a time series plot at the bottom. We can identify the results according to that. We'll skip ahead a bit — let me see how much time we have — and then go quite fast. User considerations. For our company, we've got very specific use cases. Someone either wants a cross-section, or they only want to see vinyl chloride data, or there are three contaminants of concern. We don't really need to worry about every use case because it's usually a very specific situation. For us, it's more important to understand the client's needs. Really, it is to provide the client with a simple, intuitive user experience and the ability to get at the data that they need in as few steps as possible. We've got the story of the client who simply refused to use any web mapping applications.
We built him an interface with just simply one button, and his response was to phone the engineer directly and ask for an email. At the end of the day, it's really about what the client wants and their level of comfort with technology, and it's understanding that. As much as we would like to embrace Mike Bostock's best practices with the D3 library, with colours, and use ColorBrewer and so on and so forth, oftentimes the client has the last say with regards to the colours that they want displayed on their applications. FOSS4G as a solution — which way do we run? For those of you who missed the Poltergeist reference, there is much discussion between Dr. Lesh and Tangina, I think, and Diane, Carol Anne's mum, as to whether she should run towards or away from the light. That was a question in our company, and it really was, do we go for it? There's no marketing here. It really is a grey area. Do we jump into this? Do we go down this path? As far as I'm concerned, believe the hype — it's a really great move to make. If there's anyone here who is kind of toying with the idea of perhaps making a suggestion to go in that direction, I would highly encourage you to do that. It's just really opened up doors for us. Our clients are seeing good results. We've got return business, we're getting extra clients. We've got freedom to mix and match. It places us in a good position to respond to clients' needs regardless of their hardware or software infrastructure. For me, my favourite part is that it's just a bunch of fun to work with. It's so much fun. There's good people. It's a fantastic community. I think just being here, you could probably sense that. Everyone's ready to help. It's reflected in the company product, and for the management, that means money. Just a few shout-outs for the references. We've got Alexander Bray who helped us develop a plug-in for QGIS — a fantastic gentleman based in Ukraine. We have made that available for free on QGIS; it's called QSCATA. He's the person I first contacted with regards to training in web GIS. He also helped develop a plug-in for QGIS called DB Switcher, which helps us work offline with the Postgres environment. I've mentioned Matteo Bicocchi, Paul Davies, and if anyone is interested in following me on Twitter, I'm at Danilius. That's the end. I have a few pamphlets and cards if anyone is interested. Thank you very much. Thank you for your presentation. I have a question about how you configure some of your architecture. I'm a novice at this, so if I ask a silly question, forgive me. You have multiple users in multiple places around the globe, essentially. Are you serving this from different servers that you have to synchronize? Are you installing one Postgres instance with multiple databases within that? Or are you doing that in separate locations, essentially? Then, following on from that, you described being able to mix and match your web stack. Are you doing that literally by project? Would you have Postgres feeding through MapServer for one, and then Postgres feeding through some other server for another project, out of the same database? Sure. Yes, we can do that. What we do is typically we set up a database for each project. It's a new database. We're not having one client sharing a database with another client — obviously, various reasons for that. We're in a transitional period at the moment. A lot of our pods have their own servers. Some are using the servers of other pods.
We also have a cloud server with Rackspace. We're looking at that — I'm pushing more towards that solution because you can mirror these servers on the east and west coasts. I believe they're looking at moving into Brazil, so that works for us because we've got an office there. As far as China is concerned, they're kind of on their own. They will have their own server too, or they might simply use our server. Your second question — we do have multiple instances of Postgres set up, either on the local server... We also have much movement of documentation that needs to go on. We're finding that local servers obviously support higher transfer speeds. But for company-wide collaborations, depending on the project — if it is a project which by nature is using many of our groups dispersed across the company — that would lend itself a lot better to being on the cloud-based server, so that all the users can collaborate nicely on that. The second question, yes. Depending on the project, again, for me it's all about trying new things. On every project I want to try something — let's try it and see what it can do, let's find out what its limitations are. Not all projects lend themselves to that, given the timeframe. If it's a very short timeframe or funding, we might have to just kind of replicate an existing one, but I'm always trying to make improvements, updates, do things better, get best practice in. But really, yes, it's very easy to — sometimes you don't even need a MapServer back end. If your data is not heavy, just use the vector front end from OpenLayers. There's no reason you couldn't do that. That all binds nicely together with Postgres, and Leaflet uses that, and even Google Maps. We do a lot of pro bono work. The Google Maps interface, the API, I find pretty good as well. I enjoy working with that as well. There are just so many possibilities. You don't have to stick to one stack, and don't let anyone tell you that you have to either. For me, one of the issues coming here is that there are so many shiny objects. You need to maybe pick one or two and stick with that. We're leaning towards being a more Python-orientated house. All those shiny objects that have Java associated — I should probably just put my blinkers on. We embrace Django. We're looking at Bottle and Flask. As I said, Google Maps and Leaflet certainly look nice and lightweight — just something that you can throw up really quickly. Again, fun is the bottom line. I really love it. It sounds to me like what you're describing — every project is a one-off. Is there any thought about trying to come up with some kind of internal product that you can use to roll out rapidly, that provides 80% of your functionality? We do have templates that we're rolling out. For example, as soon as the project is set up with accounting — so I have to have a funding thing set up — that triggers a response to create a database with that project's name. It has an OpenLayers web page set up with dummy data of points, lines and polygons, so that the users are able to just change that as quickly as they can. That's rolled out and available as soon as that project is initiated, and it's ready to use. There's absolutely no work in doing that because it's set up already. Users have access to that project. We get a bunch of user names and credentials set up for the project. We set them up as users on that particular project. They have full access to the project.
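(A hypothetical sketch of the kind of automated per-project rollout just described — create a database named after the project and grant the team access. The names, schema and connection details here are illustrative, not the firm's actual system.)

```python
import psycopg2

PROJECT = "acme_site_42"      # hypothetical project name from accounting
TEAM = ["paul", "jane"]       # hypothetical existing database roles

# Connect as an admin and create the project database.
admin = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
admin.autocommit = True  # CREATE DATABASE cannot run inside a transaction
cur = admin.cursor()
cur.execute(f'CREATE DATABASE "{PROJECT}"')
for user in TEAM:
    cur.execute(f'GRANT ALL PRIVILEGES ON DATABASE "{PROJECT}" TO "{user}"')
admin.close()

# Then connect to the new database, enable PostGIS, and add a dummy layer
# the template web map can point at straight away.
proj = psycopg2.connect(dbname=PROJECT, user="postgres", host="localhost")
with proj, proj.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgis")
    cur.execute("""
        CREATE TABLE sample_points (
            id serial PRIMARY KEY,
            name text,
            geom geometry(Point, 4326)
        )
    """)
proj.close()
```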
They are ready to go with a login to a web-based mapping application. Given the nature of our projects, yes, there is a lot of work that can be repeated from project to project. But you get someone who just comes in sideways and says, hey, we need to capture a bunch of photos for this particular mining project or something. We might get our guys to go out and fly some drone coverage, and they need special aerials or something. There's a little bit of customization that has to take place. Getting a template which is able to fully address each project — we're not finding that a realistic solution, but certainly getting 80% of the way, yes. I was just wondering if you could speak a bit to the mobile segment of your workflow. Was it used for data collection, that sort of thing? If so, what's your platform of choice and do you guys develop it yourself? Yes. I prefer developing in Android, but that's not really my choice. If we are building mobile applications, they have to be available on, again, whatever — it really is the client. If the client wants it on an iPad, they get it on an iPad. We can do that. I prefer the Android platform just because we can build the APKs and send them out within minutes if there are any changes required. It's slightly more complicated with the Apple development, but we build that too. Again, the beauty of the mobile applications is that they're pulling from the exact same back-end PostgreSQL database. We've got people with desktops looking at the database. We've got web applications that are using the same data that the people on the desktop can change. If they change that data, it changes in the web application. We've got tables that the mobile applications are pulling from. When they open the application, they can see perhaps the data depending on their login — say Paul logs on; Paul only sees the boring locations in zone three because that's what he needs to address today. Then, at the same time, Paul can enter data and it gets submitted to another table. These tables can all be consumed by web applications, by local desktop products, or simply by going directly into Postgres using pgAdmin 3 or the command-line interface. You can see these things being updated in real time. I hope I've answered your question. The cross-section tool that you showed there — is that something that you developed in-house? Yes. Do you have any plans to open-source that at this time? It's really — I don't know if it looks complicated, but we've got the EDMS system that I was talking about, the environmental database. It has a table with the depths of the top of casing and bottom of casing of the wells. You've got your X and Y there, and we've got the screen levels. Then we take the top of screen and bottom of screen and divide that to get the central location on which to plot the contaminant of concern. You plot that and then, obviously, just adjust the size and the colour depending on the break. A short answer — I'm sorry, I'm babbling on — but a short answer: no, I haven't considered putting it out. It's certainly something I would consider doing, because I think we get so much help from the open source community. Giving back, like the QGIS plug-ins, is something I would certainly like to do as well, so that's something that we could possibly look into doing. The difficulty there is that each project is just slightly different — the fields will be slightly different, names slightly different — so there always is a bit of a customized hack that we have to do.
It's very difficult, and then there are all the problems associated with cross-sections. There's also a setting there which adjusts the extent to either side of the transect that you're grabbing locations from. Your untrained user will go ahead and grab a kilometre out, and then it might be in a basin, so you'll have things projecting above the ground level. It needs to be used by trained hands, so to speak. It certainly is something to look at — I think possibly a logistical nightmare trying to make it standardised as a thing, but perhaps something to put out and get people's input, and some help with that would be a good thing, I think. Okay, thank you. Thank you very much.
|
In the highly competitive world of environmental consulting, being able to manage large volumes of data and deliver timely, accurate information based on that data is critical to our ongoing success. As a relatively small company, we recognized that we needed something unique to survive and prosper in an industry dominated by huge corporations. Over the past 7 years we have made a considerable effort to shift over to a FOSS4G environment, with a belief that, not only would this decision enhance what we already do well, but give us the competitive edge we would need to ensure future prosperity.A brief presentation of a snapshot of our current FOSS4G status, how we arrived here and a workflow tour beginning at the data acquisition stage looking at the feed through our patented EDMS QA/QC system into PostgreSQL followed by a demonstration of a just a few of our many custom web/mobile/desktop applications that rely on the PostgreSQL back end database and how these solutions are able to deliver accurate and timely information to employees and clients alike, and finally, where to next.We take advantage of multiple FOSS4G including the likes of OpenLayers, MapServer, PostgreSQL/PostGIS, PHP, D3 and jQuery. This combination places us in an ideal position to respond to client needs with the ability to rapidly deliver almost any request.
|
10.5446/31667 (DOI)
|
Hi, everybody. I'm Luca Delucchi, working at Fondazione Edmund Mach in Trento, Italy. It's a private company working mostly with public money. We are quite big — a few hundred employees — and there is a research centre where I work. I am working in the GIS and remote sensing group headed by Markus Neteler. So before starting my presentation, I have some questions for you. Just a simple question: does everybody know what MODIS is? OK. Some of you already use pyModis? OK. And some of you were in Nottingham last year at the presentation of pyModis? No, OK. Because there is something that I already presented last year, and so there could be some repetition for you. So, what is pyModis? pyModis is a Python library to work with MODIS data. You cannot do analysis with it right now, but you can manage the data: downloading, parsing the metadata, converting them from HDF format to other formats or other projection systems. So, you can easily download the data from NASA, and there is a repository for a long period or only some time. You really have a lot of options to set up your download system and put it in a repeatable way. So you can easily create a script and put it in a cron job, and every day it is able to download the missing tiles, so you can keep your data set up to date. Second, you can read the metadata file. Each HDF file has an XML file that stores information about the data, and with pyModis you can easily convert your XML data to a text file or whatever you need. So, you can easily download the data from NASA and work with it. But I'll show you in the next slide that we changed the core a little bit. It's possible to reproject MODIS data from sinusoidal projection to all the GDAL-supported projection systems. Before, with MRT, it was a little bit limited — it was, I think, 10 or 15 projection systems supported — but now using GDAL we can convert to every kind of projection system. Obviously, we can also convert HDF format to other kinds of formats, like TIFF or PNG if you want to show only a picture. And in the next release we'll also be able to check the quality data. In the MODIS products there is a layer called QA, the quality analysis layer, and we can use this one to check which are the good pixels and which are the bad pixels. As I told you before, pyModis is free and open source software. I'm the main contributor, but there are other people that contributed in the past. Some of them worked on a specific topic, like the quality analysis; others only fixed some of my bugs, because I'm human and I can make mistakes. And my English is not so good, so someone checked all the documentation strings in the code and fixed them. And pyModis is released under the GPL version 2 or higher. A little bit of history of pyModis. When I arrived at Fondazione Edmund Mach, Markus was already working with MODIS data, and he had a lot of batch scripts to download the data in a cron job, in an automatic way. And this was not really a good choice, because it took a lot of time, batch was not so customizable, and some operations were done more than once, because we had to parse the — at that time it was an FTP server — so there was a lot of trouble. And I told him we can do something with Python, it should be faster, and we can also make this for everybody, because the script was really adapted to our infrastructure.
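(As an aside for readers, a minimal automated download of the kind described above might look roughly like this. It follows the pattern in the pyModis documentation, but treat the exact parameter and product names as approximate — check the docs for the version you have installed.)

```python
# Sketch of a daily MODIS download with pyModis; parameter names follow
# the pyModis docs but should be double-checked against your version.
from pymodis import downmodis

dest = "/data/modis"          # where the HDF tiles should go
tiles = "h18v04,h19v04"       # the MODIS tiles covering your area
product = "MOD11A1.005"       # daily 1 km land surface temperature (example)
delta = 7                     # look back a week and fetch anything missing

down = downmodis.downModis(destinationFolder=dest,
                           tiles=tiles,
                           product=product,
                           delta=delta)
down.connect()
down.downloadsAllDay()        # grabs only the days/tiles not already on disk
```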
So the first public release was only the Python version of Markus's batch script and was able only to download the data from the FTP server. But in 2011, I applied for Google Summer of Code for GRASS GIS to create a module called r.modis to download and import the data from the NASA server into GRASS. And to do that, I improved pyModis a lot: I added the capability to mosaic and to reproject the data. So from 2011 until a few months ago, there was a lot of improvement. The last version was released, I think, last November, if I'm not wrong, because there were some changes — NASA changed the FTP server to an HTTP server — so I had to update our download module. And also we have some improvements with the quality data and a lot of bug fixes. Now, the upcoming version will be 1.0.0 — I jumped from 0.7 to 1 because I can see that now the software is quite stable, and there are a lot of new features, so I liked to make a big jump from the 0.7 version. One of the most important improvements was GDAL. My colleague Markus Metz, who works a lot with MODIS data, discovered a small bug in the MRT software: the data produced by MRT are shifted by one pixel. So we discovered that all our data set was completely wrong and we had to reprocess 13,000 maps again. It was really a lot of work. He tried to use GDAL with some tools: he used GDAL to create the mosaic, then gdalwarp to convert it to a TIFF file, and at the end imported it into GRASS GIS. So I decided to add the capability of GDAL also in pyModis, so using the Python GDAL bindings I am now able to mosaic, reproject and convert all the HDF files in a better way, I think, than MRT. I have only one problem right now: for some layers it is not so clear what the null value is, reading the metadata information, and so for some layers we are not able to provide the null value to the output right now. I hope to fix that in the future, maybe for the next release, but it's not so simple. Probably the user will have to specify what the null value is for some layers. And the other improvement is that all the scripts provided by pyModis now come with a GUI, a graphical user interface, to help the people that don't want to run from the command line — they can just fill in the form and then run it without any problem. The GUIs are automatically created by pyModis using wxPython. A lot of improvement was done also in the documentation. For each script we have some examples, and also for the library I created an IPython notebook, so everybody can easily work with the library using the IPython notebook, just running the commands in the right sequence, and you can see what pyModis can do. Here are a few stats about pyModis. There are three active contributors, but seven contributors in total. It's mostly Python, and there is some documentation written in reStructuredText. So I'll show you a little bit of how the workflow could look, or what we do as a workflow. First, using modis_download.py, a script to download the data — in this example we have three tiles near Japan. And after that, with modis_mosaic we are able to mosaic them into only one dataset. You can see that Japan is not well recognized because it is still in sinusoidal projection. But at the end, with modis_convert, we can convert it to — I think it is the latitude-longitude system — and you can recognize the shape of Japan now. The data are only the green and blue pixels; all the white ones are cloud. So we have no data there.
Because if we have the cloud, the satellite is not able to recognize the data. This is land surface temperature. So it is not able to recognize the temperature at the land, but it catches the temperature of the cloud, which is really, really low. And we have to remove it because otherwise we make a lot of mess. Is that already processed out of the data for you as you download it? Sorry? Is the cloud cover already...? Yeah, there is the QA layer; it is able to tell you if that pixel is good or not. But if you find some really low value, you can easily remove it, because if you are at minus 100 degrees you can say that, OK, this is not good data. [A portion of the recording here is unintelligible.] So we have LST data for Europe, LST data for North America, NDVI for Europe and some other products. University College Cork — a guy from the university — created the quality layer part of the library and the script. It is used at CONAE, which is the Argentinian space agency, and they are working with pyModis to download the data and then make some analysis about vector-borne disease. This is an institute in Tokyo. I went to Japan two or three years ago, and I was hosted in Osaka by Professor Venkatesh, and he told me, please go to Tokyo because there are people that want to speak with you. I didn't know anything, and they showed me some documentation that they provided to the Japanese government to apply for a project, and they were using pyModis and GRASS, and I was really, really happy to see that. There are some university students in the USA that wrote to me and made some bug fixes and other stuff, but I'm not really in touch with them, so I don't put the university because I don't know if it's private work or university work. I'm quite sure that someone else in the world — there are two here that I didn't know before — is using pyModis, and probably some of you will be next. Here I have to say thanks to these people because they helped me in some way, and obviously to Fondazione Edmund Mach, which gives us the opportunity to develop free and open source software. pyModis is one of the products, but our main business is developing GRASS GIS, so we really appreciate that they give us the opportunity to develop free and open source software. That's all. If you have any questions, I'm here and also later. Thank you very much. APPLAUSE Maybe for the registration.
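(Before the questions, a small aside to make the quality/cloud screening point concrete: a minimal masking sketch with made-up values and thresholds — the real fill values, scale factors and QA bit meanings should be taken from the product documentation, not from this example.)

```python
import numpy as np

# Made-up LST grid in degrees Celsius; raw MODIS data would normally
# arrive as scaled integers, so treat these numbers as illustrative only.
lst = np.array([[12.5, 13.1, -102.0],
                [11.8, -150.0, 12.9]])

# Hypothetical QA grid: 0 = good pixel, anything else = suspect.
qa = np.array([[0, 0, 2],
               [0, 3, 0]])

# Mask pixels flagged by QA, plus anything implausibly cold (cloud tops).
masked = np.where((qa == 0) & (lst > -50.0), lst, np.nan)
print(masked)
# [[ 12.5  13.1   nan]
#  [ 11.8   nan  12.9]]
```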
First, thank you very much for your presentation. My question has to do with — I'm not familiar with MODIS data — can you give us just a brief overview of what data is available, whether it's just temperature data, what the size of the pixels is and what you use the data for? OK, there are really a lot of products from MODIS. We are using quite a few: the land surface temperature, the NDVI product — the VI product, because there is NDVI and other indices — there is the snow coverage, and I don't remember them all, but there are really a lot. I'll try to show you the pages, if you... And also the resolution of the pixels is really different depending on what you are looking for. OK, this one. For example, these are some of the products, but there are others — on other websites there are really a lot. And these are all already-analysed products, but there are some others coming directly from the satellite, and you have to use some specific tool, like the MODIS swath tools, to use them and to extract the data. So these are the final products coming from the satellite, analysed by NASA. And you see, there are different resolutions depending on the product. For example — this is one of our best products — the LST data for all of Europe: we start from a one-kilometre resolution map of LST, with all the holes caused by the clouds, and with some analysis we are able to rescale it to 250 metres and to remove all the clouds. So we have a complete data set of temperature for Europe, and since last week, also for North America. And we are going to provide not all of the data set, because it is more or less 20 terabytes of data, but some products — for example the BioClim data; there is somewhere where you can download the BioClim and some other products. So we are also going to provide some web services. We already have WMS and WCS for the BioClim data, and probably in the future we will put out other data as well. You are welcome. Do you think this platform could be used for other sensors, like Landsat? I don't know if there is some well-organized repository where we could take the data from, because the MODIS repositories are really well organized, so you can find the data and all the tiles, and we have a consistent naming with the tile and everything. It could be possible also for Landsat, but I don't know. I was just wondering if it's possible for you to download the data not from the NASA website — if I have a local receiving station for MODIS, and it is just putting data on a local FTP, is it possible to tweak it so that it just takes the data directly from the local FTP? If you have the same structure as the NASA repository, yes, because you can set the URL and also the path. The important thing is that you create the directories with the year, month and day, divided by dots, and inside them you have file names more or less similar to the original ones. But if you have your own dataset and you provide the HDF files, you can skip the download and use only the parser or the converter and the other tools. We need to download the data — we need to download because we have no instrument to receive the data from MODIS, but if we had it, we would not need to download the data. I was also just wondering if pyModis is going to take into consideration that this data is now going to... the mission is almost getting over. I don't know, because they are starting to produce version 6.
Now we are at version 5, and they have some plan to move to the new version and reprocess all the data. If they stop the mission and the data is no longer available, we'll see. Thank you very much again.
|
One year after the first public presentation of pyModis at FOSS4G 2013 a lot of improvements have been implemented in the pyModis library. The most important news are that each command line tool now offers a graphical user interface to assist inexperienced users. Furthermore, the MODIS Reprojection Tool (MRT) is not longer mandatory in order to mosaic and reproject the original MODIS data as GDAL is now supported.Hence the most important improvement was the reimplementation of existing MRT component to use the Python binding of GDAL. This was basically driven by the fact that MRT does not properly perform geodetic datum transforms as discovered in the daily work with MODIS data within the PGIS-FEM group leading to shifted reprojection output. With the new GDAL support not only this problem has been solved but also the installation greatly simplified. pyModis is used all over the world in academic, governmental and private companies due to its powerful capabilities while keeping MODIS processing workflows as simple as possible.The presentation will start with a small introduction about pyModis and its components, the library and the tools. This part is followed by news about the latest pyModis release and indications about future developments.
|
10.5446/31668 (DOI)
|
Good morning, everyone. I'm Tim Kempesty. I work at the National Weather Service, Meteorological Development Laboratory. Today, I'm going to talk a little bit about tuning open source GIS tools to support weather data and rapidly changing rasters. And by rapidly changing rasters, I mean raster data that can change every hour, where anything you may have rendered from previous versions is instantly obsolete. So my project is the National Digital Forecast Database. We assemble gridded forecasts which are prepared by 122 forecast offices across the country, and we mosaic them together and deliver them as grid files. A couple of years ago, we wanted to put the NDFD maps on a click-and-drag interface and produce the images on demand. And the result was a WMS powered by open source and a geospatial database. And it lets us do some interesting mashups, like how many people are expecting more than a foot of snow, or some other interesting things that you may have seen in Jonathan Wolf's EDD demonstration yesterday. A little bit more about the National Digital Forecast Database. We just went operational with two-and-a-half-kilometer resolution data over the contiguous United States. That's about 3 million pixels per forecast, and it ends up being about 12 megabytes if you put it all in floating point. We have hourly forecasts through 36 hours, six-hourly out to seven days. And there are 11 elements that have this hourly data: temperature, dew point, apparent temperature, relative humidity, that kind of thing. We also have additional tropical weather and severe weather grids. And all totaled, it's about a thousand different grids over the CONUS. And additionally, we have some smaller grids for Alaska, Hawaii, Puerto Rico, Guam, the other regions. So all totaled, we have about 3,000 different rasters in the database at any one time. And every hour, any or all of them could be refreshed. So the challenges with this: the first is our time constraints. If we're getting new forecasts every hour, we want to publish them in a timely fashion. We don't want to issue a brand new forecast and not have it available to the public for another 20 or 30 minutes. So we want to be able to publish it as quickly as possible. Delivering new images in real time: we don't really have time to pre-seed an entire cache of all of our weather data. So if a user hits an uncached image, we'll have to render it as quickly as possible. And managing the cached tiles: we don't want to keep redrawing maps every five minutes if the data hasn't changed. And by contrast, we also don't want to deliver old images if the forecast has changed. So we can't just set a five-minute cache expiry and have that be good enough. And we've managed to solve all of these problems with our stack of Postgres, PostGIS, GDAL, MapServer, MapCache, and Memcached. So we'll start with a little bit of Postgres tuning, because if Postgres isn't humming along, we're not going to get very far and we're going to be disappointed in our results. The beauty of our weather data is that we can recreate it from pre-existing GRIB files. So that means we really don't have to care about consistency of the database. It's easy to reload and we don't really care if anything hits the disk in a timely fashion. We can keep it all in memory. So the first thing we do, we'll make efforts towards postponing all the disk activity that happens on Postgres. Secondly, we want to make sure the query planner is preferring index scans to sequential scans.
And lastly, we want to make sure we can handle all the requests we're getting from our MapServer. These are some of the tuning parameters, all in postgresql.conf, that you'll want to hit. First, shared buffers. Ideally, we want shared buffers to be big enough to keep our entire database in memory. That way, we're not pushing anything out during our updates. If we can't have that, ideally just keep enough shared memory to account for all of our new rasters. And a reminder: it doesn't have to be that big, because the rasters end up in a compressed data format anyway. So it may be 12 megabytes for one raster in floating point, but inside the compressed page it might take about a megabyte. So it might not be as huge as you might think. Now lastly, the caveat is, if you have a large amount of shared buffers, it actually degrades drop-table performance. When Postgres drops a table, it will scan through everything you have in shared buffers to see if it belongs to that table. So you can actually have too much of a good thing, especially if we're adding and dropping hundreds or maybe thousands of tables, which we are doing. We can't really scan through memory thousands of times — it ends up taking a lot of time. The fsync parameter applies to the write-ahead transaction log, and it tells the database whether or not it should use fsync to push the transaction log out to disk. Now if we turn that off, we get a pretty good performance benefit every time we're doing updates, but it also risks database inconsistency if there's a system crash. Since we can recreate our data from GRIB files, this doesn't really matter so much to us. So we can go ahead and take that performance benefit as the trade-off for the risk. Checkpoint segments and checkpoint timeout. Now, a checkpoint is when the database takes everything that's dirty in the shared memory — that is, different from what exists on your disk — and pushes it back out to disk. If you increase these values, it will delay the IO activity and that will help everything, especially heavy database loads; those are going to run more quickly. Checkpoint segments defaults to three different 16-megabyte sections of the transaction log, and we can easily push that up to eight or more. We can write a lot of transactions and just wait for that to fill up, then push all the dirty buffers out to disk. Checkpoint timeout defaults to five minutes. So that's: if you haven't had a checkpoint within the last five minutes, Postgres will go ahead and start pushing all the written pages from memory back out to disk. Now, since we're only expecting updates once every hour, we can get away with setting this as high as 60 minutes. We don't expect to have any new data in between. And a secondary benefit from setting it that high is that Postgres will try to write all the shared memory in half the time it takes to get to the next checkpoint. So if we have a 60-minute interval between one checkpoint and the next, we're letting Postgres take half an hour to write all the things that we've just changed from memory out to disk. So that's spreading out the IO activity over time, and it's going to help the database perform better while we're rendering our images. The effective cache size is the first one that's going to help the query planner select index scans.
This is really just an estimate of how much memory your system will be able to use for the database between the shared buffers and the operating systems disk cache. So the question is, will your data be in memory? If a query planner assumes your data is going to be in memory, it feels better about using an index scan. An index scan is more expensive on disk because it's a bunch of random accesses instead of a great big sequential all at once. So we're going to the effort of creating our spatial indexes. We want to make sure we're hitting them so this can be large. And we're probably on a big system anyway. We'll probably have several gigabytes of memory available. So that's going to be good to have that large. The next tuning parameter, random page cost and sequential page cost, these are actually arbitrary values. They don't mean anything specifically but combined. The ratio of them tells the query planner how much extra is going to cost to fetch indexes from disk randomly than a sequential scan from disk. And it defaults to four to one. If we set these between two and one, that's probably going to be good for most systems. It doesn't really take four times longer on most of our good disk systems to fetch an index and it does to fetch sequentially. But even these good values, it's not going to prevent some sequential scans from happening. The query planner may still decide a sequential scan is better. And if we know better, Postgres provides a couple of ways to avoid that. You can set some of the tuning parameters for the query planner either in your session or right in the function. So the yellow highlight right in the function, we set enable sequential scan to no. So now it's going to avoid the sequential scan at all costs and you're more likely to hit the index as we've created. Now, lastly for Postgres tuning are max connections. And the reason why we're tuning this is because MapServer with FastCGI compiled into it is going to hold open connections to the database but unfortunately it never quite reuses them. So when we're launching a FastCGI process and if it's allowed to process 100 images, it's going to create 100 different connections out to the database, keep them open, never reuse them and so they start stacking up. So we'll try to fix that in FastCGI a little bit later but right now we've run up to 1,000 connections and that's been good for us. But if you really need to run a lot more than that, you should consider using a PG pool or some other connection pooling software to help you out. So now we have a database that's pretty much ready to do our raster chores and we'll move on to tuning the display. So most of us here, I assume, we're using some form of Google Maps, a few are spherical, Mercator, 3857. And we decided that it was best to convert our data set to 3857 ahead of time before we started the drawing. MapServer will convert it for you. We can re-project on the fly but that's just going to take CPU time and if we avoid that, now we're delivering our images to the customer more quickly. So sure enough, here's our Lambert conformal map. I've had hardly any of our data is actually in 3857 by default. So it doesn't line up very well. And here's an example of what we're using for GDAL Warp, GDAL Info. That shows us our native projection source is our Lambert conformal and our target is the 3857. So what we do is we convert everything on a RAM drive to a floating point file just for simplicity's sake. 
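A hedged sketch of the two commands being described, with hypothetical file paths and table names but the flags named in the talk:

    # reproject the Lambert conformal data to Web Mercator, writing Float32 onto a RAM drive
    gdalwarp -multi -t_srs EPSG:3857 -ot Float32 -of GTiff \
        /ramdisk/ndfd_temp.grib2 /ramdisk/ndfd_temp_3857.tif

    # load it as 256x256 tiles: -Y uses COPY for speed, -P pads the right/bottom edge tiles
    raster2pgsql -s 3857 -t 256x256 -Y -P /ramdisk/ndfd_temp_3857.tif ndfd.temp_new | psql ndfd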
Dash Multi here turns on a multi-threading conversion routine inside GDAL Warp but I'm not sure we've seen any real performance benefit to using that. I think we're running so many of these processes in parallel, it may work, it may not. But at this point, now our data is tuned to our display. It's lines up nicely and we think all is on the Dory. But then we get to tile images and our data tiling and our base maps. The base maps are delivered in 256 by 256 tiles normally from Google OpenStreetMap and in some cases it makes a lot of sense to tile our data set as well, especially for running a map server like this. In addition to image rendering, we also run queries against single points to plot forecast values. You may be able to see some of the temperatures plotted in that map. So those are queries against individual pixels and those happen to run a lot faster if we have smaller tiles. The index is more specific at that point but the image rendering runs better with larger tiles. So 256 by 256, it's kind of a nice compromise to get the best performance out of both worlds. And raster to PGSQL is a tool, it'll tile everything for us. That dash t highlighted gives us our 256 by 256 tiles and those will get loaded to the database. A couple of other options up there that are important. The dash capital Y uses copies instead of inserts and that's going to be a lot faster for bulk inserting of our data. And dash P is a new feature that used to be the default. It used to make all tiles the same size and it would add no data on either side of it. And new versions of raster to PGSQL, new versions of post-GIS, you have to add the dash P to put that padding back. And that lets you add constraints like regular blocking and that allows other GDAL routines and post-GIS raster routines to take a few shortcuts and to run a little more quickly. Now, we'll take a quick look at what GDAL info produced when we ran it with no other options. We have a pixel size of 3,114 meters and we'll take note of our left coordinate 14 million meters west. So this is what our tile data set looks like. Without any other tweaking, that's what we get. And the red boxes represent our 256 tiles. And the problem is this isn't aligned at all with the resolution of the base map. So if I have blue boxes to represent image tiles from the near assume level that we're going to draw for 3857, that's a mess. So I noticed a problem when we were looking at network throughput between the database and our map server. We saw upwards of 100 megabytes going across the wire to draw just one map. And even if we're just sending raw floats across, it shouldn't be much more than 25 megabytes. So here's why this is happening. This blue is one of our image tiles. And it's intersecting with four different data tiles that we've just stored in our database. So to draw that, all four of those tiles have to come across the wire to draw just that one little tiny blue box. And we'll take a look at this tile in the center of the country. That red box intersects with nine different image tiles. So all nine of those images are going to request that tile and send it across the wire. So we're quickly approaching an order of magnitude problem if we don't align our data tiles to what we're requesting on the map. So this, we decided to align everything to the base map. So here are the resolutions. We can, we took those right out of map cache.xml. They're way too much precision, but. So we'll take the resolution and Zoom 6 is the closest one. It's 2445. 
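The alignment arithmetic worked through next, and the re-warp that falls out of it, can be sketched roughly like this; the y extent and row count are left as placeholders because the talk only quotes the westward numbers.

    # zoom 6 resolution from mapcache.xml is ~2445.9849 m/px, so one 256 px tile spans
    #   2445.9849 * 256 = ~626,172 m
    # the native west edge sits a little more than 23 tiles west of the origin, so round
    # up to 24 tiles: 24 * 626,172 = ~15,028,131 m becomes the new west extent
    gdalwarp -t_srs EPSG:3857 -ot Float32 \
        -te -15028131 YMIN XMAX YMAX \
        -ts 6144 ROWS \
        /ramdisk/ndfd_temp.grib2 /ramdisk/ndfd_temp_aligned.tif
    # 6144 = 24 tiles * 256 px, so the data tiles line up exactly with the image tiles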
That's the closest one we have to what GDAL produced. And the 256 by 256 tile, that's the size, that's how many meters across that whole tile is. So if we divide our extent, our 14 million meters west by that value, we're a little more than 23 tiles west of the, of the primary in. So we have to make our extent large enough to cover 24 boxes. And that 15 million number on the bottom right is our new west extent when we warp our data. So here's a new GDAL warp command. TE is for our target extent and TS is for our target size. So we know how many of our tiles we're creating, so that's going to line up perfectly. GDAL warp also has an option for target resolution, but I found that if I just give it the extent and the size of the map that we want, that gives us a little more precise result than just trying to say, I want the 2445. So now everything's nice aligned, but when we do the warping and stretching, it introduces a few extra node data areas. So if we have entire rows or columns of node data, we can just snip them off by adjusting the targets' extents and the sizes accordingly. So that whole northern stretch, we can get rid of that. And that whole eastern stretch, we can get rid of that one too. We can almost get rid of the western one, but there's that little tiny speck of data, so we'll keep that in. So now we're all aligned. Everything is good, right? Not quite. So as it turns out, we made the problem worse by doing this because technically all of the boxes that surround that extent, they technically intersect. They have the same line, but fortunately there's an easy way to fix that, and we fix that in the map files data section. And the part I've highlighted is the raster's bounding box intersecting with a little shrunken envelope. We get the bounding box from our image request, from our WMS request, and the shrink envelope is just taking those coordinates and bringing them in by two meters each. And we can get away with doing that. It might be Clujie, but we can get away with that because we know our tiles are lined up with images that we're requesting. And so now it makes sure that we only grab that one single box instead of all of the tiles around it. So even by getting slightly larger data sets by expanding our resolution and our extents, we saw our network bandwidth between the database server and the map server go down by 60 to 75 percent just by lining up the data tiles to what we're requesting in images. And now for the rapidly changing weather data, we have, we keep track of two time stamps for this. One is our valid time, which is temperatures valid at noon today, noon tomorrow, and that's our what's what time the forecast is valid for. And when we're raster drawing, we really only want to select one valid time at a time. Issue in time, what time was the forecast prepared? We want the most recent one at all times. So the database is not really going to care so much about the issuance time, but it will be important in the image cache, in the tile cache. So valid time is actually ideally suited to using Postgres' inheritance. So instead of raster bands, we use parent tables and child tables. And you can think of it almost like you would a map cabinet. So the parent table represents our full collection of the most recent maps for one element, temperature per se. And each individual valid time will live in one of the child tables and one of the drawers. Typically, we keep the parent table empty and all of the actual data is going to go into the child tables. 
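A minimal SQL sketch of that arrangement, with hypothetical schema, table, and column names; the inherit/no-inherit swap described next is included for completeness.

    -- the parent "map cabinet" stays empty; each valid time lives in its own child table
    CREATE TABLE ndfd.temperature (rid serial PRIMARY KEY, rast raster, validtime timestamptz);

    CREATE TABLE ndfd.temperature_new (
        LIKE ndfd.temperature INCLUDING ALL,
        CHECK (validtime = TIMESTAMPTZ '2014-09-13 12:00+00')  -- lets constraint exclusion skip other children
    );
    -- ...bulk load the new forecast tiles into ndfd.temperature_new...

    BEGIN;
    ALTER TABLE ndfd.temperature_new INHERIT ndfd.temperature;     -- add the new drawer
    ALTER TABLE ndfd.temperature_old NO INHERIT ndfd.temperature;  -- retire the superseded one
    COMMIT;
    -- the retired table is dropped later, outside the time-critical path; with
    -- constraint_exclusion = on, SELECT ... FROM ndfd.temperature WHERE validtime = '...'
    -- only touches the one child table whose CHECK constraint matches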
When you want to add a new forecast to the parent table, all we do is create a new table, load it, and then it's a simple alter table, inherit, to add it to the collection. And if we have an old forecast, we do alter table, no inherit, to take it out of the collection. And this happens to give us a couple of benefits. It's a lot faster than drop table. So we, all we do is mark it to be dropped later. So when we're no longer doing any of our important work, then we can start a separate process to start dropping all the tables that we took out of our official table. And there's the sequel for it. It's pretty easy. It uses a, no, I missed one. I'll go back. Now, the constraint exclusion, I forgot to talk about that. It's essentially a query optimization that instantly narrows in which child table has the data that you want. And it uses a check constraint to do that. And this is the query for it, query. So all we have in the green highlight is the valid time is now, now has a check constraint on it. So we guarantee that everything in that table has that valid time. And in one transaction, we inherit the new table and then no inherit the old table. And importantly, we didn't use an update on the parent table. We didn't use a delete on the parent table. So we don't have any of those costing us. And we didn't have to rewrite any indexes, which is also a time consuming process. So in the WMS query, we just add one dimension to get our layer request. Valid time and the map file essentially stays the same. We're always querying out of the parent table. We just put the valid time in the where clause and it knows exactly to go right for that drawer. So if we have something like 70 different tables, if we're not using the constraint exclusion, we'll query all of those tables and the moment you turn it on, there we have it. We're only hitting one table. So next, we get to work with the issuance time. Now, we put this as a dimension in map cache and there's the example of exactly how we did it. And this helps us to deliver only our newest maps, but it introduces two different problems. First, we need to use the user to request exactly the correct issuance time out of map cache and he doesn't really know what that's going to be. And also introduces an opportunity for cache poisoning. So imagine a scenario where you have a malicious user and he requests a map with an issuance time in the future and that image isn't there yet. So it sends a request back to the map server and map server draws the image, but it doesn't care that the issuance, about the issuance time. It just draws whatever is there and returns it back to the cache. So now, map cache has this image stamped with a date in the future. When you get a forecast that's actually issued at that time, now it's going to return you your old image. So we don't want to let that happen. So our solution was to put a little PHP script in the middle, which doesn't let the user request an issuance time. Instead, it finds it for the correct one from the database itself and will store that in our memcache as well and we'll give it an expiration time of about one minute. So the idea there, anything that hasn't changed, we can set all our image tiles as long as we want. And if anything is a new forecast, within one minute, we're going to find the correct issuance time. And one last thing from the map server, these are the fast CGI parameters that map server has. The first one we want to look at is max request per process. 
Now, we want to limit this one because this is the one that's holding open all the database connections. So ideally, we would have map server reusing the database connections but it doesn't. So under heavy load, we might run up against Postgres' max connections. So we tell this CGI process to exit after handling a relatively smallish number of requests, say 60 or so, and limit to how many of these we can launch at one time. And the second one we're going to hit is the termination score. This is a responding throttle which has noble intentions. If the program is repeatedly exiting and responding, something is probably wrong with your program and something's probably not good. So every exit and launch adds to a running score inside fast CGI. And if you exceed it by launching too many processes, you have to wait a period of time before Apache will allow another one to launch. And if you're doing something like seeding a tile cache and we're limiting how many connections we're allowed to have map server run, we're going to start running into this limit. So the idea here is to set the termination score to minus one and that's going to turn it off and that's going to allow map server to keep relaunching itself without any delays. So these are a couple of places where you can see this back end in action. The first is our new graphical user interface for NDFD and the second one is the EDD which Jonathan Wolf demonstrated yesterday. That's it. Any questions? Hi. I'm wondering with all the rights that you're doing, I know you're doing them in bulk. Does auto vacuum keep your database size under control or do you have to do any manual vacuuming? We never really run into vacuuming problems because we're not updating any tables, we're not deleting rows from any tables. What we're doing is we're just creating new tables and dropping new tables. So auto vacuum never really comes into play. Everything is handled by the checkpoints and the tables never grow. Have you considered storing files just on disk as opposed to in the database or is there any particular reason why they need to be in the database? Just curious to see if you did research the alternative. We looked at that. One of the benefits that I like from having the data right in the database is being able to run the fun geospatial queries against what we have in NDFD. So that's going to be the idea behind. Let's draw a line or let's draw a polygon and see what's going on inside there. Let's analyze what's happening with it and I think that's we're just better off doing that inside the database. Or none of the things just in terms of like performance of serving it out. Have you done any comparisons? No, we don't have any comparisons like that. So are you serving these maps as a WMS under your interface? Right, it is a WMS. We have underneath this, the first one, the graphical one, there's a WMS.php and that's what the user can use. So did you look at any maybe time permitting solutions that will help you serve your data quicker and also what process did you use since you said you're getting data pretty rapidly, right? Yes. So how are you keeping up with the processing so that it's ready in time so that, you know, it replaces the cache that is old, which is, you know, I don't know, a minute old or something? Well, any cache that's a minute old, what's happening in WMS.php is we're setting a new issuance time. We're fetching the new issuance time that we have from the database. 
So then our map cache dimension has that new issuance time and we're not actually requesting an old image that's sitting in cache. Your other question was the first one. Do you repeat that? Any time permitting solutions that's possible? I looked at using the map cache cedar, but at the moment we don't really know which, we don't have a good idea of which ones we should be ceding. We don't have. Those are being drawn on the fly. And as the user requests them, then they go into the cache. Well, they drop pretty quickly, so. Our whole database ends up being a little over 3 gigabytes, 3 to 4 gigabytes when it's all compressed. Would you ever serve up old forecasts? Would I serve up old forecasts? Yes. I mean, obviously that's. I mean, I guess if people are interested in like how was it, you know, 12 hours ago. That's actually one of the things I'm interested in too is how the forecasts change over time. So, you know, we issued a forecast for high temperature for Thursday. How has that progressed? What we could do with this is as we're applying the no inherit, we can take it out of what's official and start storing the old forecasts. So we could do that. And I'm interested in doing that. Yeah, perhaps just for your own internal consumption. For our internal consumption, yeah, we could do that, but we don't do that in this right now. Okay. But it's one of the things that I would definitely like to put into it. Thank you. Why what.
|
The National Weather Service is developing several geospatial forecast visualization and analysis tools. The back end data store and WMS server is built on Open Source GIS tools: GDAL, PostGIS / Raster, Mapserver, and Mapcache.Weather forecasts are in a constant state of flux. In the case of the National Digital Forecast Database, forecasts expire or are superseded every hour. This presents several challenges when it comes to managing forecast rasters with GIS tools, and delivering the most up-to-date, real-time forecasts with an acceptable level of performance. This presentation will examine the methods and practices we've used to optimize our data store performance, from data ingest to forecast analysis to image delivery.* Using PostgreSQL Inheritance / Parent and Child tables to manage raster updates inside the database* Managing an up-to-date image cache in Mapcache and Memcached, with rapidly changing source data.* Optimizing PostGIS raster tiles and Mapserver DATA queries for faster image generation and display over Google Maps* Future work: Expanding PostgreSQL Inheritance to work with raster overviews
|
10.5446/31669 (DOI)
|
The talks are up on the web. It's just talks.thestevesero.com. This one will be under spatial. That's always up. I like audiences to participate, so I have gifts for people who ask questions. 4GB USB metal bottle opener. Not just a USB key, but a bottle opener. And it works really well. One of our developer evangelists has used this for doing nothing but opening 500 bottles of beer and never transfer the file. And then if you want, there's books. Those are also available up in the booth. This is me. I'm Steve. This is Steve Zero. It's where you find me. Let's go to spatial. Oh, come on. Did I not put it in? Oh, no, I put it in other. Yeah, because it's vert, it's vert. I think it's in other. Because it's vertX. Web sockets for the rest. That makes me sad. But this is the good part about vertX. Reveal? Yes. I'm going to just use the local copy and by the end I will put it up. See there? Oh, that's why. The link is wrong. I forgot to put an X in there. I deleted too much. So I go to this and then I just sit. Okay, if someone wants to teach me how to move off of GoDaddy with their stupid domain forwarding policies where they do this, anyway, vert, vert. I'll fix it later. Oh, no, so there is no wireless, but there's this wireless so I can walk. Nice. Thank you. Okay, so. Will it get feedback if I do this? Metal. So that's not there right now, but I'll add that later and put it back up. Okay. So I called this Web sockets for the rest of us. If you want to harass me for being unprepared right at the very beginning, you can do that to me right here on Twitter. Sorry I didn't come to the other talks. The demo that I was supposed to run, the API provider locked my key yesterday or when I went to go run the sample, yesterday it was locked. I spent all last night trying to get it unlocked and then I gave up this morning so I spent my morning porting it to another API. So that's why I was not here. I spent the last hour porting this. All right. So I'm going to talk about some awesome sauce today because I think Vertex actually, how many of you do Java development? Good. How many of you think that sometimes it's a way and how many of you have done like Python development or some other language development and how nice did it feel when you did that other development about how fast and easy things were and you didn't have to worry about jar files and you didn't have like namespace collision and all that other stuff. Did you kind of like that and how easy it was to deploy stuff? This is what Vertex gives us. It's kind of like Node.js for the rest of us. Kind of like Festivus. Only it's Node.js for the rest of us. Okay. So I'm going to teach you not even really a lot about Paz. How many of you know what platform as a service is? Okay. So can I skip it for those who don't and just come by the booth and I'll talk to you about it? Is that okay? Okay. It's just a really easy way to spin stuff up without having to be like an AMI. Then I'm going to talk about Vertex and then we're going to watch some application goodness. And this is what we're going to have at the end. So you can go to this right now, Bitly, Vertbus. It's in leaflet. So if you are on the web, stop checking your email or tweeting. You're going to run your phone. This will work on your phone because it's using leaflet. I like having this just like a point. See? Clock someone on the head. Bitly, Vertbus. And what you'll see, hopefully, is this. 
So you should really recognize this as Jason Denzak's stuff that he's done with the Chattanooga bus system. Yeah. Except that we can do you. This is what type of important thing. Did you hear me? You want me to do that? I thought I was loud enough that even this mic would pick me up. But I guess. The guys are even gulping and you don't know what that meant. What? The guys on the other end of the screen. Oh, you're getting a tweet that's saying they can't hear me? Is that what, or somebody, whatever, whatever. I'm on the microphone. Let's do this. All right. So what you're seeing here is Jason has nicely taken, there's some proprietary provider that gives the bus feeds for Chattanooga. He has re-exposed that as a nice little JSON API. And what I'm doing here is I'm using WebSockets to push the positions of the buses back out to the browser. And it should be working on everybody else's browser. If anybody else has it up, do you see the points moving as well? Right. So. Okay. But so that's basically WebSockets. And I'm going to, how many of you have heard of WebSockets and thought, oh my God, that shit is hard. I don't want to touch that. Right. You're like, oh, someday when I have like a week, I'll sit down and figure out how to do WebSockets. And I'm going to show you how easy it is to do WebSockets today. Right. At least with Verdex. And that's why I was so, because this has been like my dream app. For you, for those of you who've been like doing Web app development since like 2000. It was like a real cool. And it was always like always just one step ahead. And now I can actually build it. I was so excited. All right. So let's go back to Verdex. How many of you have heard of Verdex? Only like four or five. Okay. This is the part I'm skipping. So Verdex. So Verdex is built on the JVM. And it's built after Tomcat, right, way after Tomcat. So it understands the idea that the JVM is actually polyglot. That there's a lot of different languages that run on the JVM. So basically any language that runs on the JVM, you can run inside of Verdex. You can run JVM, you can run Closure, you can run JavaScript, because the JVM now actually runs JavaScript as well. They'll be using Nashorn in their latest version. It also runs Jython. It runs anything that runs on the JVM. I'll give you some caveats about Jython in a little bit. It's both asynchronous and synchronous. So Node is just asynchronous. Verdex can be asynchronous and synchronous. And so asynchronous means you can fire off a request to it and then wait for the callback. But sometimes when we write code, we want stuff that's actually not going to come back until it's done. Right? Like we want actually want to tie up a thread. And you don't want to put that on your asynchronous thread. And I'll show you how they do synchronous in a little bit. It's got non-blocking I.O. So it's built on top of Netty. How many of you have heard of Netty? That's what Twitter uses to run all their really cool, fast stuff after the fail-wail. Right? Like after all the Ruby on Rails stuff started failing, they moved to Java and Netty and now things are doing much better. Right? And it's got an event bus and WebSock is built right in. Right? So you don't actually have to set up something like what's another event bus? Rabbit, MQ, or active MQ. You don't have to set up. It's built right in. Which is great for the... And it's got tons. It's built like within the last two or three years. So it's built understanding how we build Web Apps today. 
So it's got a lot of nice utility functions built in which I'm going to show. Or at least some of them. So let's talk about the architecture. Because it's very different. I'm assuming since almost everybody raised their hands for Java, you've all used Tomcat or Jetty or some sort of application server like that. This is not the case. So what you do is you write something. Here's the JVM. Here's the vertex container. And inside of that you write what's called a vertical. Can you guys see that in the back? The little text? Really? You must be really young all the way in the back. So a vertical is a single thread. Right? And it has its own class loader. And it can be one file of code. You'll see I'm going to spin up a couple verticals today and it's going to be just a single file. So like my Java file is a Java file. It's not packaged up into a jar and then it's got all these dependencies built and blah blah blah. It's just a Java file that just vertex spins up and runs. And that's it. Right? So you can write individual files and run them. Here we have the... We have a worker vertical and that's the synchronous one. Right? So what you can do with the worker vertical is it also has an isolated class loader. So this is again shared nothing between verticals. Which is key because sometimes when you have like the web app has one jar file in it, right? The SERP, like Tomcat has one jar. Your web application has a jar. You start running into configuration problems and all that. These are isolated class loaders. Right? And so you don't run it. Whatever this calls only is accessible to this. Okay? And then this one has a thread pool. The worker one has a thread pool that it uses for synchronous jobs. So it doesn't tie up the asynchronous ones. Everybody good on that so far? Because we're going to keep building on this. So if you don't get this, you're going to get lost in the next one. Any questions? Remember there's a USB stick in it for you. Yes, good. You don't. Not allowed. You're not allowed to share libraries. You're bringing in three different versions of Guava. I mean three versions, three guavas. Okay? Oh yeah. So if you want to share... Should I hold it that close or is it better back here? Back close or not? Yes. Okay. So the question was how do I share libraries if I want to? And the answer is you don't. You're not allowed. I saw another hand. Yeah. It actually gets compiled on the server. Does it? Well, so I'm using Maven when I build it. And I'm putting up class files. But I don't... In a normal version, yeah, usually you use Maven to build and deploy your files. So... But Python never gets compiled on the server. Yeah. I don't know the... Don't quote me on that, though. I'll have to look that up again. No. You can write a whole set of files. Yeah, you can actually... A vertical can be a jar. Okay. That's a whole bunch of Java files talking to each other. You're welcome. Any other questions? So this is really a chip OSGI. Yeah. It's very similar to OSGI. Yes. Except much more lightweight and I don't have to learn all that stuff that goes with OSGI. So you're taking all the fun out of it. What? I'm taking all the fun out of it? No. Fun for me is simple. I agree with Vladimir about simple being the hardest thing to do and when you do it right, it's amazing because it's so much easier to write stuff. So OSGI is not fun to me. All right. Any other questions? Okay. So then, I didn't get the question of, well, suppose I want something... Suppose I want verticals to talk to each other. 
How does that happen? Because remember, there's shared nothing between verticals. It comes with an event bus built in. So if you want your vertical to talk to another vertical, you throw something onto the event bus. Right? And then that, by default, uses JSON. So you throw JSON onto the event bus, whoever subscribes to whatever channel you publish on gets the response whenever the event bus gives it to it. Does that make sense? Right? And then, this can actually talk across modules as well. Right? You can publish event buses across modules. Remember how I'm going to show you a module today? A module can be many different verticals. It can be one vertical with many different Java files. A module is kind of a logical grouping of part of an application. Okay? That event bus, which you can't really see very well on this chart, goes all the way out to the browser. So that's all you do to basically get web sockets is throw it onto the event bus, put like four lines of JavaScript in your web page. Oh, throw it onto the event bus. Tell VertX you want to allow outbound or inbound connections on that message queue. Because it's, by default, everything shut off. So you have to specify separately inbound and outbound. Inbound and outbound, yeah. And you just throw it on the bus and as long as the browser subscribes to that same channel, it'll pick it up. And that's what I did today and I'll show you. The other thing is it's got this whole other idea that you can just have a pump that goes directly into a database. So you throw it onto the message bus, VertX will then pump that directly into the database as a record without you having to write anything specific. Right? So that means you can actually have database developers push stuff in without you having to write special adapter code. You just put it on the event bus and it just gets pumped in. Yeah. So other than the database and the web, are there, can you have other external listeners that are listening to this event bus? I don't know. I think you probably can. It's just using Hazelcast to find other people that it should be talking to when it starts up. But I don't know for sure. It's a good thing to look up. The point of today is not to be VertX experts when you leave the room because I'm obviously not one. But the point is to get you excited and then you go look it up. So I'm just going to push that ball right along. I'm just the, I just, well, I'm not going to say. Go ahead. Yeah. So this database pump, is that like just the document store or is it actually a real thing? It could be Postgres, it could be Mongo, it could be whatever you want it to be. Anything in the storage space? Yeah. I don't know what it does with the Postgres one, but the most common use case is usually Mongo, right? Where you're just throwing it. So it's not actually like partially based on putting it into a particular database? Again, I haven't played with it. By the time of, where am I giving the talk? I think it might be at Java one. By that time I'll have the answer to that question because I'll expand the demo to pump into Mongo at the same time. There it is. Thank you. One other question. Oh, you already got a key. Okay. Because he's not going to know the answer anyway. Why even bother asking? All right. Okay. And so then one last piece we're going to build up here is you can actually, VertX knows how to talk between different VertX instances over that message bus. Right? 
So if you set it up to do clustering, which is very easy to do, you just set a flag and it clusters. When a new VertX instance comes in, it looks for other VertX instances around that it should share a message bus with. These in non-open shift instances, if you run this on your own, these can actually be two completely different set of verticals running in here. So they're, and they just talk, start talking to, there's a red line in here. That's the message bus connecting them all up. Okay. In open shift, because of the way we do auto scaling, when you scale this up, this will probably be a mirror of the same thing. Okay. But if you run it yourself, you can do it just talking to any other VertX instance that comes up. You can just throw it on the message bus and the other one will get it. Any questions? So that's it. Yeah. In the back. Now I'm not going to, I usually throw the key. Yeah. Good. You, okay. And yeah. You a little, okay. Ready? Yeah. Go. How about the question, if it crashes on the VertX, does it take down the... No. It's all, so the question was, if a vertical crashes, does it take down the entire VertX? And the answer is no. It just takes down that vertical. I'll just say you're second-vert. But in an access, is it any non-program? In what sense do you mean that? Yeah. Yes. You could put JD... So the question was, can you access external resources in the same way that any Java file could do, any Java application can do it? Yeah. You could do JDBC if you wanted to. Yeah. Next question. And if I, wait, moderator, if I forget to repeat the question, like, either throw something or raise your hand. Yeah. Go ahead. What is the security control? There's a documentation that you read and it tells you how. The question was, how do you do the security? I'm just an evangelist. What do I care about security? I don't write production apps. Once I finally write a production app, I'll get back to you on that one. There's tons of security in there. There's a lot of... There are companies already adopting VertX and I'm sure they're taking care of that security stuff. So, yeah. So, is the vertical sandbox in any way? Or... Sandbox... So the question is, is the vertical sandbox in any way? What do you mean by sandbox? You could use it somehow. I know. This is for him. Can you use any of the......in access files with them? But, you know, like App Engine? Oh, no. I don't think it... Like App Engine? Like, you know, App Engine... Yeah, yeah, yeah. App Engine has all that... No, it's not like that at all. It's... You're running a Java process. Right? So whatever Java can do, it can do. Same thing for JRuby or any of the other things running in her closure or JavaScript. Any other questions? Yeah. Can you talk a little bit about extending the Event Bus to an Android client? And if there's any limitation, just compare it to browser? I could if I knew something about it. The question was, can you extend it to a... My answer would... Can you extend the Event Bus to talk to an Android client? Can Android... I don't know enough Android with modern Android to know, can it talk WebSockets? Then that... However you would have it talk to WebSockets, that's how you would have it talk to this, if you wanted to do it that way. Otherwise, you would build a REST interface and do REST calls. Okay? All right. I want to... I'm probably not going to get to show you the code because we're running out of time. So GitHub's on... 
Vertex is on OpenShift, so if you don't want to install it, it's pretty easy to install on your own machine. But if you want to actually install it and run it in the cloud, it's pretty easy and there's instructions. You just say, R-H-C-App, create my app, I want Vertex, and it spins up Vertex running on the Web the whole thing. I've got my Quick Start, which I'm going to show you. So there's a couple of differences. Come back to the slides later if you want. How am I doing? I've got only eight minutes left, so there's other stuff I want to cover. Too many questions. So the scenario is we're building a bus tracking system for Chattanooga. The original scenario was we're building a flight tracking system. But my API provider has locked that key, so we're not going to watch that demo. We're going to do the bus demo, which I rewrote this morning. And so Jason Denizak gave us this nice API. Hold on. Right? So he has a server-side event stream. I'm not using that. What I'm doing is I'm making a request every two seconds to this GeoJSON feed. Okay? And parsing it and then making the map. So that's the data behind the map. Okay. So here's the code. So the first thing is we need to start up a web server, right? So I... Where is the... Where's my dock configure? Sorry? You have a question? No, no. Just snide remarks about my... Heckling. Heckling. That's good. I like that. That keeps me in my toes. I don't have it here, so let me show it here. So inside this configuration directory... There's a file. And what I'm saying is I'm going to use this module. See that line right that I'm on that's blinking and it's kind of off-white? That's the module. So we're going to use a module to start this app. So when the convertx starts up, this is the module we're going to start by default. Okay? And that's it. This is what's a little bit different between... You use a configuration file when you're on OpenShift. That part is different. So we're going to start that module. Okay, so what's inside that module? There is... Where is it? Where is it? These are all commented out. So this is not what I want to do. Sorry. I was flopping around this morning, so I've forgotten where it starts. Isn't this fun to watch me do this? This is what it actually starts... I forget where I actually declare that this is the main vertical. It's in one of my files, but I forgot where. And I don't want to have you watch me flop around. This is the main vertical that starts everything up. Okay? I'm saying app.js, fire that up. So this is a JavaScript, and it's a vertical. So this is what a vertical looks like in JavaScript. Isn't it exciting? It looks like just basically normal JavaScript. This file said... The reason I'm using this here is you could use the same exact application localhost or on OpenShift. Right? Which IP do we want to bind to and stuff? And then I'm saying deploy the module. This is our module. I'm grabbing those from above. It's called web. The index page is here. The bridge is true. That means I want web sockets. Here's where I say I'm allowing this Event Bus Channel open outbound only. Okay? And then I'm logging to the console. Hey, I actually started a web server. And then what I'm doing is I'm using this vertical to deploy other verticals. So I'm avoiding... This is a Java one, com.openshiftfeedgetter. And this is a Python one called Flight Publisher, which if I had had more time would have been called Bus Publisher. So everywhere you see flight, think transit, and then every time you see plane, think bus. Okay? 
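Before the code walkthrough, here is a hedged reconstruction, against the Vert.x 2 Java API, of the kind of feed-getter verticle being described; the host, URI, and event bus address are placeholders of mine rather than the talk's exact values.

    import org.vertx.java.core.Handler;
    import org.vertx.java.core.buffer.Buffer;
    import org.vertx.java.core.http.HttpClient;
    import org.vertx.java.core.http.HttpClientResponse;
    import org.vertx.java.core.json.JsonObject;
    import org.vertx.java.platform.Verticle;

    public class FeedGetter extends Verticle {
        @Override
        public void start() {
            // convenience HTTP client that ships with Vert.x
            final HttpClient client = vertx.createHttpClient()
                    .setHost("bus-feed.example.org")   // hypothetical GeoJSON feed host
                    .setKeepAlive(true);

            // poll the feed every two seconds; no explicit threading code needed
            vertx.setPeriodic(2000, new Handler<Long>() {
                public void handle(Long timerId) {
                    client.getNow("/buses", new Handler<HttpClientResponse>() {
                        public void handle(HttpClientResponse response) {
                            response.bodyHandler(new Handler<Buffer>() {
                                public void handle(Buffer body) {
                                    JsonObject geojson = new JsonObject(body.toString());
                                    // drop the FeatureCollection wrapper and publish the features array
                                    vertx.eventBus().publish("feed.raw", geojson.getArray("features"));
                                }
                            });
                        }
                    });
                }
            });
        }
    }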
So this is the vertical I'm using here. Okay? So let's take a look at the... The main one that gets fired up first actually is the Java one. And it's not because it gets fired up first. It's the one that's actually doing the action that I want to look at it next. So if we go back here... Okay. Don't die on me now, GitHub. Okay. So let's actually go back to my local machine. And we're going to go to mods. No, that's not what I want. Sorry. Okay. This one's not... Oh, there's the configurator. Why is it not... Does anybody... Oh, it just was taking... It wasn't spinning though, was it? There we go. Okay. So the Java vertical. Feed getter. Normal Java imports. Okay. So basically what you do is to make it a vertical, you say extends vertical. That's pretty complicated right there. Right? And then you override the start method. So when this vertical starts up, what do you want it to do? So here, we're going to set a couple of variables. We're going to... Here's where I was talking about some of the convenience methods built in or convenience objects. It's got one called an HTTP client. It's got, as opposed to normal Java where we wouldn't have something as nice as this, there's actually a web client that can actually go out and make rest requests for us. Right? And the other one had, like, I can set SSL to true. I can do all sorts of the normal stuff. And notice that does the chaining that we like. Right? And then I'm setting keep alive. And then here I set the host, which is Jason's host. And then this is still in here. This ID and key is still in here if I was using the planes. And then I say, what buses... URL I want to grab and I want to grab slash buses. Then I print this out. And then here's another convenience method. Set periodic. So basically what this does is I'm saying I want to call this function every two seconds. I don't have to write any kind of weird threading code or any of that kind of stuff. I just say set periodic every two seconds. And then I've got, like, an anonymous interfunction, like we would expect in JavaScript as well. Right? This is a callback function. So every two seconds, call this. And then what it's doing is from our HTTP client, we're actually now going to do a handler for the body. And we're going to handle it. And all we're going to do is we've got back the data, right? That came back in the buffer, called two string on it. Now I've got a JSON object. And inside of there, you notice back in JSON, JSONs, not JSON, I just want to get rid of feature collection. Right? So I don't need all that stuff out there. I'm going to just grab the features attribute and iterate through it. And I'm just going to dump it back on the event bus. So here I do all JSON get field features, which is each of the buses. That's my flights array. And then I publish that. That's how you publish something on the event bus. VertX.eventbus.publish, whatever string you want to make up. And you put the data on it. And that just dumps the JSON right onto the event bus. And that's all this vertical does. And so this vertical basically starts up. And then every two seconds, it goes and queries in an Ajax manner, query or an asynchronous manner, queries that JSONs API pulls back the data and puts it back on the event bus. Yeah? Question. Does the event bus have a short history? To maintain what? Short history. So does the event bus have the capacity to maintain a short history? I don't know the exact answer to that. Basically what it is is just an event queue. 
So you throw things on and people subscribe and then they get stuff from it. I don't think they also have throttling in there either. Or what's it called to build up pressure? I forget the exact term in event bus terminology. Where the idea is you actually may want to build up pressure in the system before you start processing them because it's asynchronous in that way. I don't know if that's there now. It may be in the latest version. Yeah? You have to make a set of verdicts to do that? To make that into a pressure builder? Yeah, you could. And then have that publish out again or something? Like just save this up, save this up until there's an X number. Yeah, you could if you wanted to write it. People want it built right in though. That was the point. Oh yeah, the question is couldn't you use another verdict instance to do that? You could actually even use another vertical to do that, right? You don't need a whole other verdict instance. Yeah, question? Can you say anything about moments of the working versus? So much faster. It's like lightning fast. The thing is, so yeah, I've seen some on it, but it's all I've seen is from the writer of verdicts. And it blows note out of the water, right? And the reason why it blows note out of the water is because the JVM has been tuned for, what is it, 20 years now? And so there's a lot of tuning that's gone into making the JVM really lightning fast, NIO, NETI, all those people have been focusing on it for a while, and he's just building off of that. So they've shown it much faster. But the only benchmarks I've seen have been directly from somebody who is on the verdicts project. So the other thing that's a bit different though, and I don't, you know, I haven't been, the email list is so huge and I work on a bunch of different things. They were also talking, it doesn't have NPM. So you can't add JavaScript packages through NPM right now when I did this example. It may be now, the next version they were working on, they were like, okay, we need to do better package management for everybody because it doesn't do pip for Python either, right? So they needed to basically find a way to do better package management and declare their requirements. So yeah. The port 80 that I opened is if I'm in local host, I open 80. If I'm on open shift, I open whatever port open shift wants me to connect to. Or for what set request? The web sockets request and everything? Yeah, yeah, that'll happen straight over, it'll just put it over 80. The question was where do the web sockets request go, do they go over 80 or a different port and the answer is it goes over 80. All right. And so that's it. Oh no, that's it for the Java. That was easy. Let's go back and now look at who's going to consume that. And the one that's going to consume that is the Python file. So the big caveat I want to give, some of you said you were Python developers, the problem I want to tell you about Python right now is it's actually a Jython problem. So Jython was great in the beginning and now everybody kind of seems to have let it drop off the face of the earth. So Jython is still at 2.5, I think. Does anybody know that it's been moved to 2.7 yet? It's still, yeah? Any day now. Any day now. Any day now we're going to get 2.7 and Jython. So right. So like there is no JSON in this. JSON didn't come I think until 2.6. So there is no JSON in Jython 2.5. Which is sad. Very sad. So for right now I would not recommend doing a lot of Python work in Vertex. 
If you like Ruby or Clojure or Scala or Java or JavaScript, great. Python, meh. Use G event or Guna corner or something. So basically what I'm doing here is the first thing is when the only main thing this does is when it starts up, says event bus, register a handler on this channel. That was the channel we published on before. And then what I'm going to call when it happens is this function called handler. Okay? So every time that gets thrown on the event bus and this gets notified, it's going to call the function handler. And the function handler up above is really simple. It takes the data. It makes a new JSON array. This is where we have to get into this because of the older version of Python. And what we do is we get the message. That was what was the callback function. We get the body of it and we iterate through it. Which is that JSON array basically of flights. Now, of, well, buses. Each bus, right? Once I got rid of feature, what is it called? Feature collection. So once I get rid of that, there's just a bunch of features in there, right? And so this is basically saying, okay, give me each feature in the feature collection. And for each one, I make a new JSON object. Again, I have to do that because of jython25. But then in there, I put the speed, which is the, from the bus, get properties, get direction. I put the another string altitude, which is flightbus.getproperties.getroute. So which route number it is. And then I get the geometry.coordinates, right? So for all of us who know GeoJSON, that will basically just give me the array. So I got that array right there. I mean, I could have keep written that all out, but I wanted to be a little bit more explicit. And then for my flight, I put a number, lat and long, just from the position array. And now after I'm done with that, I add that object to all of the flights or all the buses. And then I publish that again on a different channel called flights updated. And I just throw the array on it. Yeah, question in the back? Is there a performance difference between the different languages? Yep, there definitely would be. You have to look at the performance of each of the different languages on the JVM. Okay? It's whatever the JVM basically does. And then, how many here have used leaflet? Most people? Okay, so I'm not going to really explain the leaflet code very much. So that's it. We're done with all our server side code. So that's, I think that's pretty simple code to write. One of the great things about Vertex is they have examples in almost every single language they support, which are cut and paste. Are we almost out of time? I mean, it was like six minutes over, but... But I'm the last one of the session? Maybe you're the last one. Oh, where's... Nobody gets up from their seat. Okay, so what I want to show then is the last piece of it, right? Which is, in here, we put the index.html. Right? And so the... First, I'm going to show the part which is how you get the WebSockets. New Vertex Event Bus. Window Locate. So basically, I'm giving the URL to what WebSocket I want to subscribe to. Right? This 8,000 is because OpenShift exposes our WebSockets on port 8,000 and 8,000, 4,4,3. If you're running Vertex by yourself, that would just be 80. Okay? And then, you always subscribe to the Event Bus. Right? So that's always the URL. So then I say, EB on Open. So when the Event Bus opens, register a handler. That's the channel. Remember that channel I put in my Python file? That's the channel. And then I have a callback. 
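The browser side of that pattern is roughly the following sketch, written against the Vert.x 2 vertxbus.js/SockJS client; the /eventbus path, the port, and the pinTheMap function are assumptions on my part.

    // index.html loads sockjs.js and vertxbus.js first, then:
    var eb = new vertx.EventBus(window.location.protocol + '//' +
                                window.location.hostname + ':8000/eventbus');

    eb.onopen = function () {
      // same address the server-side verticle publishes on
      eb.registerHandler('flights.updated', function (message) {
        pinTheMap(message);   // redraw the Leaflet markers from the pushed JSON
      });
    };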
And the callback, this is me trying to figure out what's going on. And this is Pin the Map, which if you saw any of my other demos, that's what I always call it when I want to put pins on the map. And I pass an event. And then what Pin the Map does, I do all the normal setup. There's nothing special here in setting up leaflet. Okay? You can trust me on that. I'm completely above the board. And then, the thing that we're doing here is Pin the Map. So I made a marker layer group up there. I made a marker layer group to put all my pins in. And I remove it, first thing. Because I don't want to keep adding pins to the same marker layer group over and over again. So I remove it. I get an array. I iterate through that data array, which remember I just basically did altitude, direction, I think it was, or speed, and lat long. So I just iterate through that array and I just make a new marker. Plain lat, plain long. And then on the pop-up, I bind. And then I capture the route is the plane altitude. And the direction is cut off. But there is a direction. I don't know where it got cut off on the screen. Oh, no, I know why. There's two horizontal scroll bars. Plain speed is the direction. So if I get time, when I update that URL, I'll also update all the names in here as well. And then I just basically, now I've got to find the other scroll bar. Once I make the marker, I just add it to the marker array. And at the end, I add that marker array to the map in the layer group. Did you just use WebSocket so why do you have to use the perfect event bus? So I can register to where they're coming out from. Oh, yeah. Why did I use the event bus rather than using normal WebSockets? I have never seen an example that uses normal WebSockets and this example worked right out of the box. And this whole thing for me was, I don't want to frickin' learn WebSockets. That's way too much complication for me. This is what... In the browser though, it's like, it holds the exact same thing. But this falls back to SoxJS as well. This will fall back to long polling if WebSockets is not there. Yeah, see? Way better than WebSockets. It falls back. I like my IE7. All right. So that will fall back automatically if WebSockets is not available. Okay? And that's it. And so that's how we built the map that is this. Okay? Hopefully it's still running. Come on now! Come on, buses? They're all at lunch, and I wait. Did you see one move? Maybe I lost my WebSocket connection or refreshed the page. Watch, this is WebSockets. I keep refreshing the page and it keeps moving the points. Let's see if I lost there. That bus just moved over there. So you can tell also where the bus depot is. There's some over there, right? Did anybody who had a decent network connection actually stay up the whole time? And see the buses moving the whole time? No. Oh, yeah, you want to see that? No. There. There. When it moves, it removes the marker. So, yeah. I mean, this is the best UX ever. So I don't know why anybody would want to change this. But you might want to do things like, oh, I don't know, change the color for different bus lines or all that other fun stuff. But so that's it. So I think that was, for most of us, like that was like WebSockets that I could understand rather than having Matt understands all the other stuff. But for the rest of us, this was so incredibly easy for me, right? Like I just put stuff on the event bus and then write a file. So if I were to do this over, I'd probably write that Python thing either in Java again. 
Actually, I'd probably just put it all in one vertical. There's no reason to put it on two different verticals. It doesn't do either one's not doing much. Unless, so how many of you have heard of microservices? There's a few. So microservices is the stuff that Netflix is starting to do, where basically you're writing rest services and each one has a very small little function. You're not writing big monolithic applications anymore where they have a whole bunch of different rest APIs and it's doing users and it's doing your favorite movies and it's also doing all that stuff, right, all in one rest API. You're basically writing a Spotify does this as well. You write a bunch of small different little web services so that each team can iterate their service fast, right? They're not, I'm not dependent on the user service. The user service as long as they keep their JSON the same, I don't care. I can iterate my service as fast as I want. And so that's what this is perfect for doing microservices. Because each service could be a different vertical or a different module, right? And I can have those running and then just subscribe to the event bus. Yeah. Audience member 1, next one, David, could you give me a few minutes? Automatic clustering, unicorns. I can't think of anything off the top of my head. I mean, it basically, but it gives you the async, like as a Java developer getting asynchronous stuff is actually pretty hard to get out of the box when you're doing web development, right? How many of you have not done it because you didn't want to touch threading in Java? Right, I don't want to touch threading in Java. I mean, I know it's easy in Java, but I still get it wrong. So I love this because it just takes care of it all out of the box. And I can write Java rather than having to write JavaScript. I can access Lucene. I can access Poi. I can access all the Java libraries that I want and still get asynchronous and NIO and all the fun stuff without having to try to find a JavaScript library that may or may not do it. And it's only a year old, right? So, or facial recognition libraries. There's so much written in Java that it's just nice. Yeah. Is there anyone who wants to know something like any... Sorry, say that again? Any big clients that want to know something? The VertX page will have it on there. So, any other questions? Oh, the question was any big clients. The VertX page will have it. I work on this only occasionally. Yeah? How do you compare it to Grails? How it compares to Grails? It's Grails is more, you run Grails on something like Tomcat, right? So, Grails is just another web framework that runs on top of Tomcat. This is completely different. This is you throw Tomcat out the window. Right? So, you've changed the entire way you write web applications. Right? And the other thing that I would recommend with this is you get more into the... Server side only does data. Server side doesn't do rendering of HTML. This is perfect for... I'm going to set up REST APIs and publish a bunch of REST APIs and all those fancy people with Photoshop and Mac desktop machines and all that fun stuff. They get to write the nice UIs and I'm just going to publish data that I understand. So, and you just agree on the JSON contract. Yeah? And I'll get you. So, you're not monitoring the things out of the box? There's this part of the website that talks about documentation and then that might actually talk about monitoring. I don't know off the top of my head, sorry. 
So, the question was is there any monitoring out of the box? You could always plug into your JVM, like with VisualVM, and watch what it does, but I don't know off the top of my head. Yeah? Can you write verticles in Groovy? In Groovy? Yes. There is a plug-in, there's a module for Groovy. Okay? Yeah? You can just throw Tomcat out the window. Is there any compatibility for... So, let's say you have some stuff that's already running on Tomcat. Will Vert.x support that kind of stuff, or do you need to port it from Tomcat to Vert.x? So, the question is what kind of compatibility is there between Tomcat and verticles, or things running in Vert.x? You cannot drop a WAR file into Vert.x and just have it run, right? I mean, you could still talk to your Tomcat instance that's running somewhere else, maybe build a REST API off of that and consume it inside of... inside of... from a verticle, but there's no, like, oh, it's basically... like, they're gonna go to Java 8. You know how Java 8 has already... Java 7 just came out, but Java 8's out now too, and Java 8 has closures built into the language. They're totally taking advantage of it in the new version of Vert.x. The new version of Vert.x will require Java 8, and it's gonna be using closures. The idea with Vert.x is Tomcat and all those application servers have been around now for, it's 2014, so probably about 16 years now. It's time to freshen up again a bit, and so there it's clean... it's greenfield. It won't be compatible that way. The Java files themselves, like any of your business logic and all that stuff, fine, but the rest, no. Any other questions? All right. Thanks, everybody. There are other USB keys if you wanted to come get them.
|
You have started to hear about microservices, evented async servers, and WebSockets, but then you hear that the only platform that really has those now is Node.js. While you like JavaScript, you would like to use other languages. Well, Vert.x has all these features AND runs JavaScript, Java, Scala, Python, Ruby, CoffeeScript, and Groovy. You don't have to be a Node.js hipster to have all the fun - though JavaScript is fine if you roll that way. This talk will cover a basic introduction to Vert.x and its architecture. Then I will show how I built a WebSocket asset tracking application with Leaflet and a Vert.x backend application. The goal is that at the end you can go home and start writing your own scalable, (a)synchronous WebSocket applications.
|
10.5446/31672 (DOI)
|
Hello to my presentation and also hello from Switzerland. That's the country I departed from three days ago. It's a country in the middle of Europe with lots of mountains, which you can see in the footer. And even if you haven't been there, I'm sure you have heard of some of its products. Switzerland produces quite accurate watches or army knives with many helpful tools. Or maybe you've tried some Swiss cheese or eaten some Swiss chocolate. Another product of Switzerland is the Atlas of Switzerland. It's the national atlas and that's where I'm working. It was established by the Swiss cartographer Eduard Imhof at ETH Zurich in 1961. It was first a printed atlas, and since the year 2000 it's digital. Next year we plan to release a new version; compared to the previous version, which was shipped as a DVD, it's now completely web based. Our main user interface is a virtual globe and it allows us to display our maps there. And also we can experiment a little bit with 3D cartography. Here are some impressions. So we have a variety of themes like nature and environment, economy, history. But we also have some themes which are global and put Switzerland in the context of the world, or at least of Europe. And our aim is to construct attractive maps that are at the same time still readable, so that we gain the interest of the map reader. The new version is built with open source technology. As the virtual globe we use OSG-Earth; it supports many GIS formats and you can integrate custom digital elevation models. On top of it we have the Chromium Embedded Framework. That's the browser engine behind the Chrome browser, and with it we can develop our GUI based on current web technologies. That's why we can also include JavaScript libraries like D3 to style the maps, to create legends or to create charts. And that's what I want to focus on in this presentation. So first about the styling. For this I modified the existing D3 scales, and it doesn't take much modification. Many translate very well to the geo domain. So I just wrapped them in functions with different names. Only a few modifications were necessary. For example, for the thresholds, so that the maximum value lies inside the interval. Also I added a function for custom values if none of these scales fits. But this is rather the exception. In general the styling is based on the names and the indices of attributes. And as an extension to D3, it's possible to chain two domains and multiple ranges. Also, in the geo domain you sometimes have missing values or undefined values as inputs, or even invalid values. And this is what we handle too. And lastly we have strings instead of arrays for better readability and for faster editing. Here you can see an example of a styling function. So we have a static styling attribute, which you can see here, and a dynamic function. And the dynamic function specifies an attribute and some unique values as domain values. And these values are mapped to some colors. And also a color in case the input value is undefined. So when we have a feature like this with a certain ID and an attribute of two, then the corresponding color is chosen, which is this one. And this string is returned for OSG-Earth, which looks like that, which combines the static and the dynamic components. The result might look like this. We have some extruded cells with different colors and a little bit of transparency. What we want now is the legend next to it, and it nearly comes for free because we have wrapped the D3 scale functions, so we can construct the legend.
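A minimal sketch of that wrapper idea (not the atlas code itself), using the D3 v3-style API that was current at the time; the attribute name, colors, and osgEarth-style property names are invented for illustration. Returning one plain style string mirrors the "strings instead of arrays" point above.

```javascript
// Wrap a D3 ordinal scale so a feature attribute maps to a color, unknown or
// missing values fall back to a default, and the result is one style string.
function uniqueValueStyle(config) {
  var scale = d3.scale.ordinal()          // d3.scaleOrdinal() in D3 v4+
      .domain(config.values)
      .range(config.colors);

  return function (feature) {
    var v = feature.properties[config.attribute];
    var known = (v !== undefined && v !== null && config.values.indexOf(v) !== -1);
    var color = known ? scale(v) : config.undefinedColor;   // explicit fallback for missing input
    // combine the static and the dynamic styling components
    return config.staticStyle + ' fill: ' + color + ';';
  };
}

// e.g. for the extruded-cells map: three classes plus a grey fallback
var style = uniqueValueStyle({
  attribute: 'landuseClass',
  values: [1, 2, 3],
  colors: ['#1b9e77', '#d95f02', '#7570b3'],
  undefinedColor: '#cccccc',
  staticStyle: 'extrusion-height: 500; fill-opacity: 0.7;'
});

style({ properties: { landuseClass: 2 } });   // -> "extrusion-height: 500; fill-opacity: 0.7; fill: #d95f02;"
```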
Additionally, we need some metadata like translations or value maps to create the legend descriptions and labels. So here, in this case, these are trees. The legend is aligned in a matrix-like structure and this allows us not only to display rectangles, but also lines and symbols. Here's an example of a symbol legend and the corresponding map looks like this. So these are some cable cars in Switzerland. There are some special cases for unique values. The first one is when you have some undefined values which don't really have a meaning. For example, for administrative units where you specify only the colors. So you put them in a group. The other case is when you have too many unique values, for geology maps, for example, this might be the case, then you put them also into groups. And when you click on such a group, then the unique values are displayed for this group. The next legend type is based on D3's threshold scale. And it basically specifies a value range and assigns a styling attribute. So here it's the color. And as I said before, the maximum value is included in the interval, in the last interval. And you can make a pipeline map with it, for example, where the pipelines have different sizes and colors. You can also combine unique and limit values. Here we have languages as unique values and the dominance of the language as limit values. The languages have different colors and the more intense the color is, the higher the dominance. So we can see in the map that a lot of German is spoken in Switzerland. And you can identify the communes where the percentage is quite high. The last legend type I would like to present is called interpolated values. And based on D3's linear scale, it interpolates the styling attribute. Here it's the color which was interpolated. But it can also be logarithmically and exponentially interpolated. In this map, the precipitation is visualized in millimeters per year. And the cells were extruded to strengthen the effect. So we have now seen some choropleth maps, some line maps and some symbol maps. But we can also construct some chart maps with D3. For this, I have implemented six common chart types, which you can see here. Some circular charts, which use D3's arc function, and also some rectangular charts, which are simply composed of SVG rects. So these charts can be placed into OSG-Earth's SVG billboards. For this, the icon driver needs to be extended. And as we have only static maps, we can pre-render these charts with Node.js so that the performance is better. How is the chart defined? You can see here it's similar to the styling function earlier. So we have again some static attributes, specifying the chart type, and some dynamic parts, which give the color of the segments and the height of the bars. So when we have a feature like this with certain attributes, then an SVG document is generated, which looks like that. So as all values are negative, the colors are in a reddish hue. And the smaller the values, the greater the bar length. The map could look like this. So in this case, it's the annual length variation of glaciers in Switzerland. So it does not look very good because the glaciers are melting. So we'd better care about the climate so that we have some glaciers left in 100 years. But back to the topic: the legend comes nearly for free as we used the previous styling function. And we have here some limit values for the legend. But you can also have some unique values. This is the electricity consumption in Europe with different sectors, with different colors.
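As a rough illustration of the chart generation just described, here is how one bar-chart billboard could be produced as an SVG string with the D3 v3-style API. In the atlas this is pre-rendered server-side with Node.js; this is a plain browser snippet, and the values, sizes, and colors are made up, not the atlas implementation.

```javascript
// Build one small bar chart as SVG markup from a feature's attribute values.
function barChartSVG(values, width, height) {
  var x = d3.scale.ordinal()
      .domain(d3.range(values.length))
      .rangeBands([0, width], 0.1);                     // one band per bar
  var y = d3.scale.linear()
      .domain([0, d3.max(values, Math.abs)])
      .range([0, height]);

  var container = d3.select(document.createElement('div'));
  var svg = container.append('svg')
      .attr('width', width)
      .attr('height', height);

  svg.selectAll('rect')
      .data(values)
    .enter().append('rect')
      .attr('x', function (d, i) { return x(i); })
      .attr('width', x.rangeBand())
      .attr('y', function (d) { return height - y(Math.abs(d)); })
      .attr('height', function (d) { return y(Math.abs(d)); })
      .attr('fill', function (d) { return d < 0 ? '#c0392b' : '#2980b9'; });  // reddish hue for losses

  return container.node().innerHTML;                    // markup handed to the billboard/icon driver
}

// e.g. annual glacier length change in metres (invented numbers)
var svgString = barChartSVG([-12, -30, -5, -48], 40, 60);
```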
The bigger the ring chart is, the larger the value. So this is an example of a European map and you can compare the value of Switzerland with other countries. So I think now it's high time for a demo. So this is our prototypical application and it's the geology map. And here's the legend which has been overlaid on the virtual globe. And you can now click on the categories to see the unique values. And I've inserted a little animation so it looks a little bit nicer and quite smooth. And you can also zoom in to the virtual globe to get some more details and tilt it. And you see it's overlaid on the virtual globe. Another example is the precipitation map. It needs a bit of time to load. Yeah, here it is. And the cells are loaded in tiles. Maybe some of you have been to the talk yesterday. So OSG-Earth also supports some kind of vector tiling. And when you zoom in, the tiles get loaded a bit faster. And you can tilt the map and identify the areas where the precipitation is quite high. So you'd better not forget the umbrella there. And now some chart maps. Here's the glacier map with the bar charts. And you can tilt them again. They are billboards. And because we extended the icon driver, when we tilt the globe, occlusion culling is activated. So the charts are hidden. And also some decluttering. When charts are too close together, then they get a little bit smaller and more transparent. And another example is the age structure in Europe with some divergent bar charts. So we can zoom to Switzerland and compare it with other countries like Germany and France. So it doesn't look that different, but it looks a bit more different when you compare it to African countries. So that's what's possible. And maybe you have already identified some things which can be improved. So at the moment, we don't have legends for comparing sizes. But you can imagine, as Susanna Bleich described in her PhD thesis, to have a reference frame for the individual charts to compare them and to put them into relation. Also the tilting of charts can be improved, so that when we look straight down on the globe we have a nicer 2D view, and the billboards get activated when you tilt it. And also when you zoom in, the charts should get a little bit bigger. And when you zoom out, they should get a bit smaller. And lastly, we can also imagine having some real 3D charts. But then we probably need another JavaScript library like three.js to build these and put them into the virtual globe. So at that point, I want to thank the company behind OSG-Earth, which is Pelican Mapping. So they developed the virtual globe and also did the CEF integration. I want to thank Mike Bostock for developing D3 and all Chromium Embedded Framework developers and Chromium developers. I want to thank my Atlas of Switzerland team, especially Remo Eichenberger, who did the SVG integration in OSG-Earth. And finally, ETH Zurich for sponsoring my trip. So I hope you enjoyed this presentation and thanks for your attention. Thanks for the talk. Can you comment a little bit more about the Chromium desktop application and is D3 running in that or is it all server-side? The Chromium Embedded Framework is put in an OpenGL context on top of the virtual globe. And we have rendered the charts beforehand on the server side with Node.js. So OSG-Earth first had the V8 JavaScript engine integrated, but they swapped it for a different engine, which is called Duktape. But with the V8 engine, it was possible to dynamically create the charts in the virtual globe.
Now, yeah, some bugs occurred, so we rendered them on the server side. Thank you. Thank you. Do you have any provisions for or ideas about including interactive legends? I'm thinking mouseover or clicking on the items, because if you have 500 types of rock in your geology chart or a fluid scale for precipitation, I would want to get to the exact value of an item. Yeah, that was actually possible in the previous version. So when you hovered over a polygon, then a little dot was displayed next to the label in the legend. So you could see the relation. So yeah, that might be possible, but at least we also have some feature picking. When you click on a polygon, then a little feature info appears and you can see the value like this. Any other questions? This might be obvious, but did you develop your 3D base map too? The base map, did you develop that? The base map, oh yeah. So we've integrated the custom digital elevation model and we've rendered the shaded relief. And yeah, that's also what was done, not by me, but by the team of the Atlas of Switzerland. Hi, I was wondering if you're going to be distributing your source code for your application? Yeah, it's a bit difficult because our primary purpose is to develop this atlas, so we want to focus on that. But we also have the plan to construct a so-called Atlas Platform Switzerland where other interested parties can use this framework and use the code which I've presented today. So maybe in the near future it might be published on the web, but yeah, it takes a bit of effort to do that. Okay, thank you. Going back to what she was asking, was that map, the 3D map, what was it rendered in on the client, what library was it using to render? Was it D3? Was that D3 or was that something else? The base map. Yeah. No, that wasn't D3, that's rendered by OSG-Earth. Okay, got it. Any other questions? All right, thank you.
|
This presentation introduces a mashup of the JavaScript library D3.js, the virtual globe toolkit osgEarth, and the web browser engine Chromium Embedded Framework. Using the example of a national atlas, it is demonstrated how these open source frameworks facilitate the creation of charts and legends for a series of three-dimensional maps. First, it is explained how a map in osgEarth can be styled with D3 scale functions. Legends for choropleth, line, symbol, and grid maps are derived therefrom. They allow depicting unique values, value ranges, and combinations of those having two dependent variables. Also, legend templates for color gradients and hierarchical categories have been developed. All legends are superimposed on the virtual globe by means of the Chromium Embedded Framework. Next, six widely used chart types - i.e. pie, ring, wing, divided area, bar, and divergent bar charts - with individual properties are presented. Charts are defined with the aforementioned styling functions, created with D3.js, and displayed as billboards in the virtual globe. Finally, a live demo is shown, current limitations are discussed and future work of the atlas project is outlined.
|
10.5446/31673 (DOI)
|
My name is Daniel, I'm with the Brazilian Federal Police and I'm a forensic examiner. And I'm going to speak to you today about the system we have there for geographic intelligence. And these are the topics we're going to cover. We're going to speak a little bit about the Brazilian Federal Police. We're going to speak about the system and we're going to spell out a few lessons learned since Denver 2011, which was when the last presentation about the system was delivered at FOSS4G, about outsourcing, choice of technology, increasing the user base, and increasing institutional awareness. We're also speaking a little bit about the roadmap and choices about the roadmap. And we're going to have the bounty hunt again that we had in Denver 2011. We're going to speak about it. The Brazilian Federal Police is like any other police. It deals with, handles a lot of problems, a lot of fronts. And we have a lot of attributions. It's like the FBI, the Coast Guard, the ATF, EPA, GAA, just like the Canadian Mounted Police that does everything. And we have, we're a small institution. We have about 14,000 employees. And of these, we have around 10,000 policemen, 1,000 forensic experts among these 10,000 policemen. The number is wrong. Sorry, I forgot a zero there. I was supposed to have written 10,000. And about 240 deal specifically with environmental crime. We have several areas. We have ballistics, chemical forensics for pesticides and drugs and a lot of stuff. We have medical, like the traditional CSI investigation. We have computer forensics. We have, this is a staged scene. This is not an actual CSI scene. And the things we're going to do, the system was designed to provide information for civil engineering and environmental forensics, especially environmental forensics. When one thinks about a GIS system for public security, for law enforcement, one thinks about dispatching police cars and mapping the muggers in the city and closing escape routes and stuff like that. This system is different. It provides information for the forensics examiner. Forensics is paramount to avoiding impunity. So during due process of law, when somebody is able to prove that the offense has actually been committed, that's a very strong incentive for the person to never commit another crime again. If he gets away with it, it doesn't do anything. It's even worse. And we had several sponsors from the beginning. So we had the Japan International Cooperation Agency that sponsored the project at the very beginning, the UN Office on Drugs and Crime and FNEP, which is a research fund in Brazil. And the system started with several goals. The first one was to distribute ALOS images. ALOS is a satellite that was, that was, it's not operating anymore, from the Japanese space agency. And it provided great images of the Amazon, especially radar images. They see through clouds. So it's not called the rainforest for nothing. It rains a lot there and you can't see anything with a regular satellite through the rain. So we had a lot of radar images and we also had some stretch goals. So not just the images. Since we have a system, let's try to publish the maps of all the work being done. So we can see hotspots and where people have more difficulty, where the greatest demands in the field are. And we would also like to publish support maps for environmental forensics, because we are a national institution in Brazil. We have a small problem that sometimes the person that's going to work on the job is not from the region.
He's from someplace 2000 kilometers away. So he doesn't know the region and he doesn't have the maps with him. He doesn't know the source of the maps for that region. So we gathered this in the system. Now anyone who's working outside their original jurisdiction has access to a centralized repository of information. And we would also publish tools to make this information easier to handle so that we don't require everybody to have a desktop GIS and a lot of training to be able to handle this information. And we will also go beyond environmental forensics. So this is a first map we had of the ALOS images. These are just the footprints, just the boundaries of the data in the images. So we can see that there's a lot of Brazil over there, certainly all the Amazon forest and a lot of other areas where we don't have much imagery. And we were able to publish maps with our production data. So these are forensic reports in the Amazon region. So a forensic expert can click on them and see a lot of metadata. Go to the link to the other system that has the documental information. We have supporting maps, which are data that the forensic expert would have if he was from the region. And we also have data that synchronizes daily. And we gather data from a lot of sources that maybe the forensic expert wouldn't have access to. So as an institution, as a centralized repository, we were able to get into institutional deals with other institutions to provide information for us, and we published it here. What is the system made of? We have a database that's fully open source. That's not how it started. It started with an ArcSDE database. We had a lot of trouble migrating. And we have an ArcGIS server. We have a legacy web application since 2010. It's still working, written in Flash. And we're going to see on the roadmap that's all going to change. All of this is going to change. Not everything, the PostgreSQL is going to stay. And we have content. We have a content portal. So we have documentation there. We have news. We have polls so we can poll the users. What kind of geographical reference system do you use? What's the next course you want to see? The next training you want to have. So we have several options there and we act on that. And the physical infrastructure is really small. It fits on a computer rack. It's actually not even in the main IT department. It's not in the data centers. It's in the computer room of the local IT. And we have about 12 terabytes of raster data, 8 gigabytes of compressed vector data. That's a lot. And 850-plus layers plus 950-plus views, where some data is displayed in different ways so that we can understand it better. And we have also managed to publish some tools to the portal. So this is a very simple drawing tool that a person in the field can use to draw whatever map or whatever report he wants to. He doesn't have to use a desktop GIS. So we have a few tools here on the right. He can draw some polygons, edit the polygons, calculate areas, distances. There are some other tools on the bottom of the screen. The layer switcher is on the top of the screen. And there are a lot of layers. So I just showed you a few tools. But we have a lot of layers inside Layers. I didn't open it because it was going to cover the whole screen. But this is an example of the administrative layer. So we have our local units, our national units, and the limits of competence of each unit.
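As a side note on the kind of calculation behind that drawing tool's area and distance buttons, here is a small generic JavaScript sketch. It is not Inteligeo's actual code (the current client is the legacy Flash application); coordinates are assumed to be [longitude, latitude] pairs in degrees.

```javascript
var R = 6371008.8;                              // mean Earth radius in metres

function toRad(deg) { return deg * Math.PI / 180; }

function haversine(a, b) {                      // great-circle distance between two points
  var dLat = toRad(b[1] - a[1]);
  var dLon = toRad(b[0] - a[0]);
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(a[1])) * Math.cos(toRad(b[1])) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

function pathLength(coords) {                   // metres along a drawn line
  var total = 0;
  for (var i = 1; i < coords.length; i++) total += haversine(coords[i - 1], coords[i]);
  return total;
}

function ringArea(ring) {                       // approximate polygon area in square metres,
  var sum = 0;                                  // using a common spherical approximation
  for (var i = 0; i < ring.length; i++) {
    var p1 = ring[i], p2 = ring[(i + 1) % ring.length];
    sum += toRad(p2[0] - p1[0]) * (2 + Math.sin(toRad(p1[1])) + Math.sin(toRad(p2[1])));
  }
  return Math.abs(sum * R * R / 2);
}
```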
So if you click on the map, you get information for all of this. And we also have some statistics tools. These stay on the bottom along with some search tools. This is just an example. You select, these are pre-made reports. But you can configure them. Like you can draw an area and say, oh, I want the forensic report breakdown by subject in this area and in this time period. And it will build you a pie chart in real time. We're able to do more, customize it, make more types of graphs. But that's an ongoing project. And we also, when trying to spread the scope of the project, we tried some different stuff. For example, this is more of a logistics map. It tells you where stuff got seized. So these are pesticide seizures, a big deal in agriculture. So we have, there's illegal pesticides coming from other countries in the continent. And they are not, the substances are prohibited in Brazil. So they cause cancer and all sorts of problems. And the green dots are where they were seized. And the blue diamonds are where they were analyzed, where the forensic examination occurred. So this has to do with the amount of forensic experts we are going to allocate in each region. There are some issues that management should look into, for example. That state in the south handles most of its own demand. But some others send a lot of stuff to the capital. And some, there are some oddballs here, for example. In this case, why hasn't the unit sent it to their own unit? Why did it send it to the capital? So that's a management map. This is a map from another version of the system. And the amount of cocaine seized in Brazil in 2010, separated by states. No, sorry, it's not cocaine. It's the amount of money embezzled in public works. Yeah, that's ugly. And we see a geographical bias there. There's a region where there's more theft of public money. And there's a specific state in there that has a very low amount of money stolen. Why is that? So that's something we should think of as a public, as a law enforcement issue. Are people there not working? Are they being bribed? Is that a better state than the others? What's happening there? So we can take a look at that. So in this time, we learned several lessons. The system started to be developed in 2009. It became operational at the end of 2010. And we learned that outsourcing database development or management is really bad. Because it locks down everything. Somebody else other than you has the keys to your most valuable asset, which is information. But we also learned that if you're going to outsource the web development, that's really easy. That's actually a good thing to do because it takes a lot of work to build these maps. You get to have input all over the, all along the process. If you're hiring somebody, you can tell them what you want them to do. And it's separated from your core business. It's separated from your information. They don't have access to your raw information and to your databases. But we also learned that even if you outsource something like web maps, you still need to be able to tweak it and change small things. Because you don't want to do a whole other contract just to change a small thing. That takes a lot of work. You have to understand what has been done. And you have to have the source. Everything has to be open at least for you internally. And another thing we learned is that open source does not cost zero dollars. It is free as in freedom. But there is a total institutional cost. It's not zero as in free beer.
Like they say in open source, it's free as in freedom and free as in free beer. But the total cost is not zero. Why is that? Because you have to train people. There's time to implement stuff. It doesn't come ready for you. And also there's the fact that in Brazil it's harder to hire people to work on open source. So even if you have a lot of money, you have to wait. Because you don't have a lot of developers for you to hire. They don't know the open source products. So you either hire from abroad or you do a closed source project of your own. And one other thing I learned is that you get to do what you want and you get to do it fast, because you don't have to go through all the hiring process and all the paperwork and all the red tape and ask people stuff. You just download it and you do it. It's really fast. If the thing you want to do is small, and if you want to do large stuff, that's what I told you about before. And proprietary software hidden costs are significant, because that's the other side of the coin. What I learned is that cost overall is much larger than expected. So proprietary software still requires you to take people off their regular jobs and go to training. It still requires you to pay maintenance on the licenses and especially for the contracts, it's a lot of work. You don't need only the person that's developing to stop whatever they're doing. You need the logistics department, the financial department to do a purchase, which is very labor intensive in public service in general. And we also tried to do the scope change in the beginning. We tried to spread the scope, but it's very hard. We found it was very hard to do. And the user base stayed mostly the same. We have the same amount of, roughly the same amount of users. They do have some very different feedback from what they had in the beginning. They say the system is much easier to use now, that the data is more complete, that support is better and a lot of stuff. But the amount of users stayed the same. We're still trying to spread the scope, but I'm just telling you that you have to have that in mind. If you're starting a project with that aim, you have to think about it. And one other thing that comes from the one on the top. So if the user base stayed the same, what happens with the institution as a whole? Are they thinking this system is important or are they thinking it's just another waste of money and time? You have to build institutional awareness with management. Management, in my experience, usually does not understand open source, does not understand IT; they understand labor hours, they understand results, they understand a different set of data than the IT expert, the IT developers, are accustomed to. So you have to make your report with that in mind. Focus on results. Focus on testimonies from the users in the field, so that they're saying that their job is easier now, that they're producing more because of it. Or that they are doing better work now because of it. And I also learned that curating the data is paramount, because I think we bit off more than we could chew. It's a lot of data, especially if you're dealing with data without metadata, so you're downloading from a lot of websites and you're getting stuff from people's drawers to put it in the system. That's a mess.
So what I wanted to have from the beginning, and I still don't have, is some specialist in geo-processing, or somebody like a geologist or a cartographer or a forest engineer or a civil engineer, somebody to look at this data and separate what's good from what is not, fill in the metadata, change the presentation, put in the field names. For example, if you retrieve metadata from the system, from Inteligeo, if you click on top of the data, the fields that come are still raw. They're raw from the shape file. So you have to fill all of this in for 850, 950 layers and that's a lot of work. And for the future, we're taking from our experience and we understand now that less is more, but only when you know what to select, when you know which of all the stuff you tried is actually good and is actually going to build a better system. And we found out that search, the metadata pop-ups, and the layer switcher are key tools in the web application. There are more, and there's more verbose analysis on that, but we identified the main tools that people use and that stop people from using the system because they are either too difficult or they crash or something like that. And we're going to focus on that, and we dropped stuff, and whatever is left has to work really well because it will compete with other workflows that the person has. So, the designer, sorry, I think it was the keynote speaker from yesterday, said that the tools are not the objective in themselves, they're built to do a task, to make something easier for you to do. You have to keep that in mind and you have to add value to your tool. So whatever is left has to work really well or else it's not worth having it. And the big thing next is separating GIS and IT, because right now, to get a professional that knows both GIS and programming, both GIS and IT, here at FOSS4G is really, really easy. You bump into people all over the place, but from my experience it's hard to get people that know both. So if you can separate the system and the infrastructure to delegate parts of the work to IT people and parts of the work to GIS people, that's much easier for me. I get qualified personnel to do both, but I don't get qualified personnel to do both at the same time; I can't get the same person to do both. I get two people but not one that can do both. And that's pretty much it. The last FOSS4G I've been to, I awarded some bounties for people who solved bug reports or feature requests that I published on the OSGeo trac. And this time I have some swag too. So I have a t-shirt and I have a beach outfit. You can see I've been using one of these in the corner. And I have some mugs. Some of them are really nice. And please stay tuned. If you care for these, they're going to be awarded at the code sprint on Saturday. And I will publish all the information about the bug reports and the feature requests on the wiki for the code sprint. So, FOSS4G 2014 code sprint wiki. There you go. It will be there eventually. This is Oliver. This is Taichi Furuhashi from OpenStreetMap Japan. Paul, it's George Hosche from Portugal, OSGeo Portugal. Ivan, Frank, do you know these people? Oh my God. Okay. MapProxy. GDAL. I don't know what he does, but he's really cool. Taichi was the president of OpenStreetMap Japan at the time. Paul is the core developer of PostGIS, is a founding member of PostGIS. And George was the president of OSGeo Portugal at the time. So thank you very much. If you have questions... Sorry. Here, my contact. Talk to the folks.
Oh, was there any one event that made you migrate away from the software? You mean, was there anything that made me migrate from closed source? Was there anything that happened that led you to try to migrate away? Yes. When we were gathering information, we were trying to put it into the database. ArcSDE gave me a lot of trouble. And I went to talk to some of the developers and they said I needed training to handle, like, I don't know, 50 metadata tables that they had on ArcSDE. I got really angry at the time. Really. And I spent more than one year trying to break free from ArcSDE. And now, as others have also moved away from ArcSDE, I think that's a pattern. So it's really bad. And ArcGIS Server had a lot of issues, especially in older versions, handling raw PostGIS. So you had to bundle it with ArcSDE to handle it well. And I opened at the time, it was a long time ago, 2011, 2012, I opened like 40 support requests, of which I think only one or two got actually solved regarding this integration. So it was really, really a big issue. After I changed everything, it was really easy to build connectors and stuff for the other databases. Thank you. You talked about the attitudes towards social media and trying to change people's attitudes. Sorry, you talked about the attitudes of people towards social, sorry, towards open source. And you ran into some challenges along the way. Did people have specific reasons not to go to open source or was it just because they didn't know about open source or just afraid of what was unknown? From what I've got, the closed source people do a really good job at marketing. And they are present, especially present in governments. And don't get me wrong, I like ArcGIS Desktop a lot. I also like Quantum GIS a lot. So, okay. But I like ArcGIS Desktop. But ArcSDE was a failure, in my opinion. But they don't know this. And you have to build awareness of the pitfalls. So if you want to integrate, if you want to talk to other people, if you want to do this, don't go there. Go to this place elsewhere. At least let us work a little bit more to make this work with that. Did all your data layers already have location information or did you have to geocode some of those layers to get them back to you? No. Most of them were already shape files. So they had location information. But like half of them didn't have a projection. So I had to find it for each one and build scripts to check consistency and stuff like that. Thank you. Thank you.
|
This is a case study about using WebGIS for fighting crime from a forensics standpoint. When one thinks about GIS for law enforcement, vehicle tracking and messaging immediately come to mind. Inteligeo provides support for law enforcement in a different manner: it provides information for the forensics examiner. The system became operational in November 2010 and it now has more than 850 themes and 950 data visualization layers, and is available only inside of the Brazilian Federal Police internal network. It started as a tool for fighting environmental crime; now it covers a wide variety of subjects such as environmental data, chemical analysis of pesticides, mining operations, public works fraud and legal status of rural properties. We also have a raster data repository and integration with databases from several institutions. The system uses a fully open source database. After a difficult migration from a proprietary database, the data can now be handled not only by the proprietary GIS framework, but also by tools from the open source ecosystem and by our own maintenance tools. During almost four years of operations, some lessons have been learned and the initial strategic plans have changed. We will discuss issues such as what can be outsourced and what has to be made in house, decisions regarding the database, experimenting versus focusing, key features (our) users use most, demonstrating value to internal management.
|
10.5446/31674 (DOI)
|
Hi, good morning. My name is Dan. I'm here to talk about open source and social media aggregation. That title is a bit of a mouthful and I, in hindsight, I wish I had titled it something simpler like finding the needle in the social media haystack because that's generally kind of what I'm going to be talking about and whether you're trying to find the emergency management related needle in that social haystack or anything else, it is a needle in the haystack. We're going to talk about some of the tools that we've used in our open source project to do that. I'm glad to see Thomas Holderness is here today. He gave a great presentation yesterday about Map Jakarta. If you didn't get a chance to see that, definitely go take a look at the presentation online because he talks about social media and how they use some of the same tools and there's a bit of overlap in our projects in terms of the technologies and the challenges. So Thomas is actually here today but I would highly recommend that. Take a look at that. My name is Dan King. I'm a software developer. I'm based in San Diego. I do a lot of work for the government of Pierce County up in the Seattle Tacoma area. And these slides, this is just a simple reveal.js. It's the single HTML page. If you see those little buttons down there, we're all using the same technology. They're online right now, viewpoint.pro.slashphosphorgy. The project I'm going to talk about is called First to See. The spearhead leaders of the group are the government of Pierce County, Washington and the Pacific Northwest Economic Region. The actual project is for all of the agencies in the Puget Sound region to deal with a widespread disaster where social media might come into play to enhance situation awareness. And so we have federal state local government partners, some of the tribal communities, some of the business communities, some of the large ports along the Puget Sound region, some of the large container ship companies are all interested in, you know, how we can use social media during a large scale disaster. And we have the Coast Guard. We're working closely with the Washington National Guard. All these agencies come into play and try to work together in an incident. And there are challenges with that and then adding social media to the mix provides another challenge. So we'll talk about that. But first I'm going to tell a little story. About 34 years ago, a spring day, my little brother comes running into the house. He comes shouting, look up in the sky, look up in the sky. You can see Ash Cloud, Mount St. Helens has erupted. Look up in the sky. You can see it. You can see the volcano. And he was scolded for telling lies. And he was made to sit down and hear the story of the boy who cried wolf. And all the while, Mount St. Helens was in fact erupting. And from, we lived in the Puget Sound area. And the wind was blowing all that ash to the east. But from our vantage point, we could actually see the dust cloud rising. And sure enough, you know, he was telling the truth and we just didn't believe him. So if that happened today or if my little brother had today's technology back in 1980, he would have just tweeted it. And then the family could have gone to Twitter and see what's trending. And I don't know if you're going to read that there, but I've got the Seahawks and the Sounders and I think the folks up in the Northwest are going to be tweeting about the Seahawks regardless of what else is going on. But we would see Mount St. 
Helens and then the ash cloud all being tweeted to corroborate his story. Okay. So the idea of social media and how that relates to an agency trying to use social media, we have our existing situational awareness tools, you know, everything from 911 to command centers. And social media doesn't replace any of that. It simply augments it. So we've got our existing tools and now social media is just a new piece of that puzzle. Not meant to replace anything, just augment what we already have. Okay. Not that there aren't those out on Twitter who do cry wolf. Just last month, a disgruntled gamer tweeted a bomb threat about the plane that a Sony executive was on. And his tweet was taken seriously. The plane was diverted and the disgruntled gamer is in hot water right now. In time. It was only a month ago, but I'd say in time. But on a more serious note, during the Boston bombings, at the Boston Marathon bombing, that's a good example of taking a look at how social media could be used in a big disaster. So you've got a couple of things going on there. You have people taking pictures with their cameras and tweeting about it and putting Facebook posts up, you know, just a family fun day out. And then you have this incident and then suddenly you have a lot of that information to use to put together the pieces. And in fact, there's a real interesting article about a Facebook post that someone had captured one of the suspects on. You know, and that's an example of how we can start to use social media. The Boston police in particular use social media to communicate out to their citizens. We're seeing the PIOs of various organizations using social media as a public information tool. But we also have a tool for community surveillance, if you will, all these little pieces of the puzzle kind of coming together to paint a bigger picture. But still, like during that time, there's a really good article of what went wrong with social media, what Twitter got wrong. So, you know, it can't be taken as the gospel truth. And that is one of the challenges of social media and something that the naysayers will talk about, they'll talk about the false rumors and that's part of the problem. To which we say, yeah, but still. Yeah, but still, Mount St. Helens, still a full-on volcanic eruption. So this is a map of the depths of the earthquakes under Mount St. Helens since the eruption. So what's going on here is we're all sitting kind of at the edge of the Juan de Fuca plate and the Pacific plate is pushing on that Juan de Fuca plate, pushing it into the North American plate, creating the subduction zone. So here we are in Portland and our subduction zone as we sit here is, I believe, I think that's west there and that's east. So underneath us, just slowly but surely, pushing us into the North American plate. And the result of that is for the last three and a half millennia, every 300 to 600 years we're getting a catastrophic earthquake, like a 9.0 earthquake, something that's going to be devastating. In fact, in the Puget Sound region, the military, the Navy is planning an exercise to deal with what happens when all the bridges go down, what happens when all the airports crumble. And part of that exercise is going to be a big five day event with naval ships parked out in the Pacific to act as mobile hospitals. And we know a lot of the communications is going to go out. But along the way, we can be sure that we're going to get some social media that we can start to harness.
But at any rate, back to the subduction zone. So the last big one was the 1700 Cascadia earthquake. And there's a real interesting article about the geological changes that happened during that big one. And I believe they don't know the actual magnitude, but an 8.7 to 9.2. But that was big enough that the tsunami that was generated here was felt all the way in Japan. That's how big that earthquake was. So let's do the math. Every 300 to 600 years, the last one being in 1700, meaning we're in that window right now. And the article kind of goes into detail, the sub-stats, about a 10% chance that we'll see that big one in our own lifetimes, and all the other speculation. But this is the interesting point: geologists and civil engineers have broadly determined that the Pacific Northwest region is not well prepared for such a colossal earthquake. So we're going to plan for it. Let's hope for it. Let's hope for 2300 and not the year 2000 or 2050. And it's not just earthquakes that we've got to worry about. Remember a year ago, the asteroid came down in Russia? And people were tweeting about it. Now, good luck getting grant funding for an asteroid hit. But the thing about a system and social media in general, if you design it for one event like a massive earthquake, well, when the oil spill happens or a dirty bomb or a man-made disaster, be it accidental or otherwise, you have your tools in place to deal with these large scale multi-jurisdiction problems. And although we don't know what is going to happen and we don't know when, we know it's going to be bad. And we know people are going to tweet about it. That's what we do know. And so what we want to do is take those tweets and take our situational awareness and augment it with them. And so with that, we've developed our program called First to See. And this is the homepage website, firsttosee.org. I should have put that up there. And the actual First to See platform is a mobile app. And you can actually go online and download the apps. We vet people. We're in the process of vetting 6,000 National Guard troops with the app so that when the public submits something that they see, it's kind of the see something, say something genre that you see at the TSA. If the public submits something, and say a vetted National Guard member submits something, the Guard member is pre-vetted, and so is that information. But anyway, we can take those points coming in from the field and overlay them with social media to get an augmented situation awareness picture. And then we actually have an API so that our partners who have their own mapping systems can harness this information as well in their own system. So I'm not going to spend any time today talking about the mobile app portion of it because that's a proprietary piece. But you can download it from the app store. But the neat thing with the open-source software is that you can integrate them together. And so here's the first glimpse of our open-source software. This is an OpenLayers map interface with GeoServer running on the back end to show some shipping routes. We can click on the points and see the photos of both social media and the reports. And the social media tool that I want to talk about today is called Swift River. And that is from the Ushahidi group. Has anyone heard of Ushahidi? Okay, a lot of people have heard. So they came about in Kenya during the Kenya elections, kind of crowdsourcing. And then during the Haiti earthquake, they were used extensively to help the disaster relief there.
And Swift River is a spin-off of their flagship Ushahidi platform that's designed to look at social media. And what we really like about the Swift River platform is this water paradigm that they've come up with. And so the idea is that an individual tweet, a single individual tweet, is called a droplet, and be that a tweet or someday a Facebook post or an Instagram. And then all the oceans of the world contain all the world's tweets. And so then you take that droplet and put it into a river; a subset of those oceans is a river. And that's something like a hashtag for an event; that is your river, and generally there are a lot of droplets in a river. Too many to really work with. So then this notion of a bucket, where you take that river and scoop some important droplets out into a bucket, and then you work with that information in that bucket to create a PDF report or an Excel report or do what you need to get that augmented situation awareness. So we're going to talk about the Oso landslide that happened, killed 43 people up in the Snohomish County area, a rural part of Washington. So in the immediate aftermath, the Oso slide hashtag and the 530 slide hashtag came to prominence. So we set up a river trained towards the Oso landslide and then put a filter of missing. So let's take a look at these words, missing and Oso slide. So in the world of Twitter, my son is still missing from the Oso slide. Here's a Diana Ross remix of Missing You tops the charts, and the Oso landslide death toll reaches 41. Okay, so immediately, just taking the Oso slide hashtag brings us down to two, and because it was a search and rescue effort that got underway and we were looking for the word missing, the tweet about the missing person ended up in our bucket. And here's a screenshot just of the interface. Swift River comes with its own interface, but because they have an API, we were able to create a custom interface that we could, you know, use to create our own look and feel and integrate with our branding and our mobile app information. And then here is the final bucket that we were able to put out in PDF and Excel format. This is just a quick screenshot of our first D3.js chart. This is the drop count per month of all of the rivers that we've been tracking. And so there's a lot of them we've collected. We've been in production for a little over a year and we've collected over 6 million droplets at this stage. We have a couple of spikes there in July. We did a test. We created a river on the keyword fireworks and that brought in like a million tweets that weekend of all the fireworks tweets, you know, all over the country and the world, but mostly the country. And so that caused a spike there. And that's part of the training process to train people, you know, if you want to watch the fireworks at a big gathering, try to find the hashtag that's appropriate to the area, because flood or tornado or storm, I think that's what happened in March as well. We did a, or actually someone was testing it out and did a filter on the hashtag weather and that just brought in, you know, millions of tweets that weekend. We're going to talk about the First to See stack. We're running on a LAMP server on the Amazon Cloud. We have the Swift River, which I just mentioned there, and then we also have the Twitter API. To find out what's trending, for example, Swift River doesn't provide that capability, but because we have access to the Twitter API, we can get that.
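To make the river and bucket idea concrete, here is a toy JavaScript sketch of the filtering just described. The real work happens inside Swift River; the droplet fields and filter terms here are simplified stand-ins, not its actual schema.

```javascript
// Toy version of the river/bucket filtering: hashtags define the river,
// a keyword scoops droplets into the bucket.
var riverFilters = ['oso slide', '530 slide'];        // what flows into the river
var bucketFilters = ['missing'];                      // what gets scooped into the bucket

function matches(text, terms) {
  var t = text.toLowerCase();
  return terms.some(function (term) { return t.indexOf(term) !== -1; });
}

function intoRiver(droplets) {
  return droplets.filter(function (d) { return matches(d.content, riverFilters); });
}

function intoBucket(riverDroplets) {
  return riverDroplets.filter(function (d) { return matches(d.content, bucketFilters); });
}

// the three example tweets from the slide
var sample = [
  { content: 'My son is still missing from the Oso slide' },
  { content: 'Diana Ross remix of Missing You tops the charts' },
  { content: 'Oso slide death toll reaches 41' }
];
intoBucket(intoRiver(sample));   // -> only the first droplet ends up in the bucket
```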
We have OpenLayers and MapQuest, OpenStreetMap as part of our mapping stack. And of course, we saw a bit of D3.js and we've only scratched the surface. There's a lot we can do, as we saw from Mike's talk yesterday, with visualization and taking all this information. We have, you know, the content, we have time, we've got photographs, we've got date and location, and there's just a whole wealth of opportunity to use something like D3.js to get a picture. And so this is just the First to See stack puzzle. Actually, there's a few more pieces to that puzzle. And then that Swift River piece there is also a LAMP stack. It's written in a bunch of languages, Java, Python. We've got some PHP there. We've got Apache Solr doing some searches. It's got RabbitMQ doing some messaging. And, you know, if you're going to go install this, you know, as Daniel said, it's free as in speech, not free as in beer. You have to have someone on the ground who knows these things and can help you get them installed. And so you take the Twitter API. This last one's a neat feature about Swift River. It includes a portion of the Stanford natural language processing tools. And what that attempts to do is take the text of these tweets and tries to find a location for them. So the tweet about Mount St. Helens, it has the coordinates of Mount St. Helens in its database. And so even if that tweet wasn't geolocated, it would be able to put a point on the map and give us a rough idea. Now, those tweets are in aggregate. And so if anybody on the mountain or in the mountain area is tweeting about it, it would all be one cluster. But the neat thing about the Twitter API is we can also pull in the coordinates of people who are tweeting. And the neat thing about that is that the number of tweets with actual coordinates is increasing. When we started looking at this a year ago, even just a year ago, the number we were seeing and looking at was 1 to 3 percent of all tweets with an actual live coordinate. Last month we ran a test and that number was up to 7 percent. And that's a huge improvement. And it may not seem like much, but when you talk about a million tweets, you know, that's 70,000 points. And even if you just looked at points on a map, those 70,000 points in and of themselves tell some sort of story. You know something's happening in that cluster of points. And so there's a glimpse of what we're dealing with. And again, one of the challenges is if one of those pieces breaks, the system either doesn't run properly or doesn't run at all. And that's one of the challenges we'll talk about. If you have a chance, take a look at this. Again, these slides are online. There are links. The Seattle Times has a neat webpage where they overlay the landslide area with a map, and you can see, you can use a slider there, that mudslide coming down. And this was a real rural area and it caused widespread damage. And this is, you know, an example, a microcosm of what the big one might do. And imagine a lot of these landslides all over the place and you're going to have, you know, people tweeting about, in our case, missing persons that we'll see in a minute here. So after this, what we did is we set up a river. We used the Twitter API to see that Oso slide and 530 slide were trending. Well, you can just go to Twitter and find out what's trending. But needless to say, we found out what was trending. We put, we brought in about 20,000 tweets that first day.
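Two of the aggregations mentioned above are easy to sketch in code: counting droplets per month (the numbers behind the drop-count chart) and measuring what share of droplets carry real coordinates. This is a generic JavaScript sketch; the field names (date_pub, geo.lat, geo.lon) are assumptions, not necessarily Swift River's schema.

```javascript
function dropsPerMonth(droplets) {
  var counts = {};
  droplets.forEach(function (d) {
    var key = d.date_pub.slice(0, 7);               // 'YYYY-MM'
    counts[key] = (counts[key] || 0) + 1;
  });
  return counts;                                    // feed this to a D3 bar chart
}

function geotaggedShare(droplets) {
  var withGeo = droplets.filter(function (d) {
    return d.geo && typeof d.geo.lat === 'number' && typeof d.geo.lon === 'number';
  });
  // roughly 0.01 to 0.03 a year ago, about 0.07 in the recent test described above
  return droplets.length ? withGeo.length / droplets.length : 0;
}
```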
Because it was a search and rescue effort, we filtered on the keyword missing to put them in the bucket and that got us about 1,000 tweets, sorry, also like droplets into the bucket. And then that number of 1,000 was small enough for us to go through and highlight individual ones and put them on a PDF report and send them to the front lines. And I've taken snapshots of a handful of those two dozen tweets that we sent to the front lines. And now to be honest, I don't know what effect it had on the front lines. But the fact is, it was the first time we were able to take a real live incident and use social media to get some information to at least help augment the search and rescue effort. So this is a photograph of some of the damage. And again, there's the first to see slide, hashtag, there's a word missing. So we have some information about his 13-year-old son. So that gives us some information about who's still out there missing. Here is another person missing. He's still head drives some location information. Here's an important one. I'm going to talk about this one in a minute. So thanks everyone for your concern. Our friend Tom, he's still missing. Please pray for his wife Deb. And then we start to see. Now this is an interesting one here. We're not gathering Facebook and Facebook is closed. There is some ability to gather some information. But this person here had access to their friend's Facebook post and took a screenshot of it and then tweeted it on Twitter. So that was an interesting way for us to get some Facebook information indirectly. And then of course, we're starting to see photographs. And the photographs are really important for the search and rescue effort. Here's a person. This is Steve and this is someone who spoke to his brother. He's missing. And here's someone's grandfather who's missing. And these photos there, even though we have hotlines set up and people can report them, for the people on the front line to get a piece of paper with these pictures on them, we think that's really valuable and is going to assist them in their efforts. There's a really good article about how social media was used in First to See in particular in the emergency management blog there. But let's talk about some challenges along the way. So a couple of challenges we have about attitude. We heard the challenges about open source in the last session. In addition to that, we have the challenges of social media. I'm going to talk about number three at the moment. So a year ago, two years ago, we would go into agencies and they would say, you know, no social media. We don't understand it. We don't know how we're going to use it. We don't want it. And the conversation today goes more like this. We don't know how we're going to use it. We don't understand it. But we think we want it. We think we should be looking at this. And so the attitude is slowly changing. And the other attitude, too, of course, is people about their location. And that jump of 3% to 7% of people who are letting Twitter show their location is an improvement. And whether that's a changing attitude or people around their mobile, more, it's good for us. I know my time's running out here, so I'll just quickly, the other challenge is the high, sorry, the low signal to noise ratio and then the complicated stack. The signal to noise ratio. I did not know this, but apparently God has a Twitter account. And he gets a lot of tweets. A lot of people tweet to God. 
And in fact, after any sort of disaster, that is one of the main hashtags is, you know, pray for so and so. And so the temptation is, okay, well, let God have his tweets. We're going to ignore all of those. We're just going to focus on our own ones. So remember that last slide, my friend Tom is still missing, pray for his wife Deb. Well, that's valuable information, okay? We can't ignore God's tweets. We need to look at God's tweets as well because there is valuable information in there. And it's not just God who adds to that poor signal to noise ratio. We've got the American Red Cross, give blood, and they'll tweet that over and over and over. And then the PIOs during the Oso incident, the hotline for missing persons, the news retweets, okay? And I'm getting the signal that I need to wrap up here. Okay. So obviously this is a challenge, that complicated stack. So a solution, jack of all trades, master of none, that's me. Ideally you want a superhero like the Ushahidi developer, Emmanuel Kala, who's been real instrumental in keeping us afloat and getting us running. But the real solution is many hands make light work. And that's where the open source community comes into play because, you know, as more of us get involved with the Swiss River Project in particular, and as we get involved, we're going to expand it to include Vine videos, Instagram, and Facebook posts, you know, we can all benefit and we can all do our part and together, you know, have a better platform. So I'll just finish with a couple. You can see these links here. Eric Holderman, he's from the Pacific Northwest Economic Region, talks about the social media attitude towards disaster management. We saw that one. This is the IT director, Linda Jarrell at Pierce County. She was featured in an article talking about making a case for government technology in general. It's a really good article and she also goes into first to see in the social media aspect. And so I'm going to wrap up with this final slide. It's a jungle out there. So the horse is saying, you're right, there is a needle in the haystack to the cow. And the point being, yes, the needle in the haystack has always been the challenge, but he's got the tool there. He's got the electric sensor, the metal detector. And that's kind of what we have with the Swift River and the open source tools with all that social media. So it is like hunting for a needle in the haystack, but fortunately we have some tools. Slides are online. There's my contact info. And I think I have time for maybe one question or two, two questions. Yeah. So when Cascadia happens? Hi. When the Cascadia earthquake happens, how long until the cell network saturates and nobody's tweeting? How long until, so the question is, when Cascadia happens, how long is a cell network? We don't know that. And a huge one, we might not get any tweets except for the news media that the send upon the Northwest to tweet about it. And that's certainly the case there. We know that the phone networks are the first to go down, so that there is a time period when the tweeting is better because once you send, you're here sending a tweet, Twitter can't send it because it's mentioned. So you walk outside and it sends it. So the vast majority will be lost, but we're hoping to get at least some of them. Do you recommend hash tags or do you search for possible hash tags that users use? Yeah, that's a good question. And the question is, do we use hash tags or just search strings? It can actually be either. 
We recommend hashtags because hashtags tend to be specific to an incident, like the Oso landslide or the Highway 530 slide. They're very unique to that area. The problem with just search terms is that a term like storm or other broad words is going to pull in a lot more information from other areas that we're not concerned with. So we recommend hashtags. Thank you, everybody.
|
This paper is a case study of FirstToSee, the social media situational awareness project organized by agencies in the Puget Sound region of the Pacific Northwest. The purpose of FirstToSee is to capture, analyze and map social media using open source tools. The desired result is improved situation awareness through a clearer operational picture which in turn assists in providing a more targeted response. The Puget Sound is an actively seismic region and is prone to massive earthquakes, such as the 1700 magnitude 9 Cascadia earthquake. The next such earthquake in the region will cause widespread casualties and damage. Inevitably, when the 'Big One' hits the public will turn to social media to report what is happening in their area. The collective information has the capacity to assist in assessment of problems and direct resources where needed most. The project stack includes SwiftRiver, PHP, D3JS, GeoJSON, OpenLayers and GeoServer as well as complementary iOS and Android mobile applications that allow trusted sources to report additional critical information during an incident. The open nature of social media provided numerous opportunities to test the platform. Since deploying in May of 2013 we tracked Twitter activity at major Puget Sound festivals and gatherings and real-world incidents around the globe. The first regional disaster since launching came in late March of 2014 during the Washington State Oso landslide. The paper continues with analysis of the keywords searched, content gathered and methods for geolocating, aggregating and disseminating the information to emergency responders at the scene.
|
10.5446/31675 (DOI)
|
Today I want to talk about some of the work that my colleagues and I are doing at Ecotrust around climate change. And so we've been working a lot in the Pacific Northwest natural resource areas, namely forestry and agriculture. Today I'm going to talk about how we're applying this to agriculture. It's a new project, not all the way through, but I think there's some really interesting kind of tidbits that might be helpful to you all. So this report came out this summer called Risky Business; it's an economics report about the potential economic impacts of climate change. And they looked at the normal things like sea level rise and heat exhaustion and all those things. But they also looked at commodity agriculture. And one of the things that they concluded was that the agriculture industry actually was the best prepared to adapt to climate change, because they can plant a new set of crops every year, et cetera. And then they wanted to say, and this is the quote I love seeing as a GIS analyst and data analyst, armed with the right information, scientists and farmers can mitigate some of these impacts. And so immediately we think, what information do they need? We haven't really answered that question, but we've kind of developed a toolkit to sort of try to answer that question. So I'm going to go over kind of the conceptual framework that we use, called bioclimatic envelope modeling. And then show you some preliminary results to kind of see what we can do with this framework. And then look at an actual implementation, hopefully one that you guys can apply and think how you can learn to use it in your own problems. So this, I mean, really this is the whole talk right here, or the conceptual basis of the whole talk. Bioclimatic envelope modeling, sometimes called species distribution modeling or climatic niche modeling: effectively you take observations, of a species usually. In this case it's green for the species present, white circle for the species absent. And then you want to, in this attribute space, differentiate suitable from unsuitable climates. So this is a really naive approach, models will do much better, but you can think of drawing a line in this attribute space. And this is two dimensions; most of the data that we work on is, you know, nine, ten, up to 30 dimensions of data. So you can imagine this boundary isn't a line, it becomes very complex. But effectively it's the same thing. We're trying to take a new observation of temperature, rainfall, any of these climatic variables, plot it on these axes, see where it falls; if it's inside the circle, it's suitable, if it's outside, it's not suitable. It's a really simple model. Some people would say it's really simplistic. It doesn't take into account biological adaptation, it doesn't take into account interactions with other species, it doesn't take into account migration and seed sourcing and so on. It's a flawed model. It's wrong. So this is one of my favorite quotes when talking about this model, because it is wrong in all those ways. It's very simplistic, but it's also very useful. It provides a first approximation of vulnerability to climate change. And the way I usually describe it is this. Let's say you're a farmer, you're growing winter wheat today, and your future climate is projected to be unlike anywhere that currently grows winter wheat. That's a red flag, that's an indication of vulnerability.
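As a toy illustration of that envelope idea, a few lines of scikit-learn are enough to fit a classifier on two made-up climate axes and ask whether a projected future climate falls inside or outside the suitable region; the numbers are invented and this is only a sketch of the concept, not the project's actual model.

```python
# Toy "envelope" sketch: fit a classifier on two climate axes and ask whether
# a new observation falls inside the suitable region. All data are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row is one observed site: [mean_temp_C, annual_precip_mm]
X = np.array([[10, 900], [11, 1100], [12, 1000], [18, 300], [20, 250], [7, 2000]])
y = np.array([1, 1, 1, 0, 0, 0])   # 1 = species/crop present, 0 = absent

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A projected future climate for some pixel: is it inside the envelope?
future_pixel = np.array([[14, 700]])
print(clf.predict(future_pixel))         # most likely class (suitable or not)
print(clf.predict_proba(future_pixel))   # degree of certainty, not just yes/no
```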
So when we talk about climatic suitability, you know, we draw that line on the axes and say this is suitable, this is not suitable, it's not black and white or in this case purple and green. There's a range of variability in there. And this certainty, or this range is actually we interpret as a degree of certainty. In other words, the dark green, we are very certain that in this case this is Douglas fir, one of the iconic tree species of the Pacific Northwest. This is the dark green is that's, it's stronghold. That's where we're absolutely certain that it's viable. The purple, absolutely certain it's not. And in between, you know, depending on the conditions may or may not work. So how do we draw that line on those climatic axes? So we use a technique called supervised classification, comes out of kind of the machine learning literature. And there's a lot of different techniques to do this, but this is sort of the general overview. You take your training data where you know, you know your X and your Y. The X variables are your climatic explanatory variables that you think kind of derive the distribution of the species you're interested in. And then you have the Y variables, your observations of whether that species exists or doesn't exist there. You draw a relationship between those X's and Y's. And it's, I used a black box, it's only semi-ironic, but it's the black box really, what goes on inside the black box differs depending on what analytical technique you use. So you might have heard of decision trees or logistic regression or neural networks. These are all different kind of mechanisms for drawing that relationship between X and Y. But when it comes down to it, they all do effectively the same thing. And that's given a novel set of explanatory variables, predict the response. So in this case, we're predicting, yes, it's suitable, there's a 90% chance that it's going to be suitable for that species. And so each of these observations, you can think of this and on a raster, all this work is done in a kind of a raster data model. And so you can think of each of these observations as a single pixel. So for every pixel, we're trying to predict how suitable it is. So just taking a step back, another kind of a little brief background on climate models. I'm not a climate scientist, I just use the data they produce. So it's helpful to maybe look at the background a little bit. First, they start with an emission scenario. Future, you know, our contribution, our, you know, human's contribution to greenhouse gases. Those get fed into what they call a general circulation model. There's dozens of these. They, they're four dimensional models, really complex, they model the Earth's processes, which give you a predicted future climate, usually at daily time steps, well into the future, at a very coarse spatial grid. Those coarse grids are then kind of calibrated to local weather stations, so you get this downscaling, it's called. And then finally, in order to get some sort of meaningful metrics, you do some sort of temporal aggregation. In other words, you don't care if it's going to rain on June 22nd, 1978, or 2078. You want to know the average rainfall for June in that decade, for instance. So our goal is to take this technique, biochlamatic envelope modeling, and apply it to food production zones in the Pacific Northwest. Actually, in our whole bio region, which includes California and all the way up to Alaska. 
And develop the whole workflow using open source tools, well documented, and we've actually developed a couple kind of utility tools to go around this and make the workflow a little easier. So just getting into kind of the specifics of this project that we're working on. These are the nine explanatory variables that explain roughly 96% of the variants in food production zones. Food production zones are sort of these contiguous areas that have similar characteristics for agriculture. And these variables, including the ones in italics, are the climatic variables, are the ones that actually drive the majority of the definition of these zones. So you can think with these climatic variables, we have present day climatic variables, but we can also swap in future climates, as predicted by the climate models. So those are our X variables, the explanatory, and these are our Ys. This is what we're trying to predict. And so this is a map of California, Idaho, Washington, and Oregon. We cut it off just above the country, the national border, not because data wasn't available, but partially because data wasn't available in Canada, and also because not a lot of agriculture happens in the northwest area of Canada, on up to Alaska. So mostly just the states. But these zones here have been defined through another process, and so what we're trying to do is basically predict where these zones might shift in the future. So we plug in those future climate variables, and for each pixel try to predict what the most likely zone is. So this is our first cut at the 2070, what the zone is in 2070 might look like. So a couple of things to note. The coastal areas actually don't change all that much, if at all, and that's consistent with a lot of the climate models, which say inland areas are going to experience much greater temperature rise. And then you look at some areas like this is the kind of, so the red basket of Washington, they grow a lot of wheat, apples, it's a very productive region, you see that shifting a lot. So they're going to see a lot of novel conditions over the next 60 years. So that indicates, sort of, I guess the take home headline for this map would be that vulnerability is geographically variable. Just because the climate is changing globally doesn't mean we'll all experience it the same. So there are some areas that may shift more dramatically than others. That was sort of the predicted, you know, this is all of our zones, right, and so it's the most likely zone. You can look at a given zone, and this is getting down to a little bit more like fine scale. You can look at a given zone and see what the probability of future climates being similar to that zone are, and then animate it over time. So this is an animation of the Willamette Valley Food Production Zone. And what we can see, so this is the low emission scenario, so this is basically as if, this is modeled as if humanity got its act together and sort of reducing emissions today. What the things that strike me about this is the Willamette Valley conditions are sort of shifting northward. You look up to Bellingham, and Bellingham by the end of the century is actually going to be fairly similar climatically in terms of agricultural productivity, presumably, as today's Willamette Valley. But the Willamette Valley itself, aside from maybe that little western edge over there, stays roughly, it remains roughly the same conditions. 
If you look at the high emission scenario, excuse me, this is sort of the business as usual, if we continue kind of emitting greenhouse gases at the rate we're currently doing. You see that same northward shift, but you also see that the Willamette Valley is actually transitioning into, or with some probability will transition into, a hotter and drier sort of agricultural climate. So it's not necessarily that it's a vulnerability; you can also see the glass half full and see it as an opportunity. Especially, I like to think, you know, Pinot right now, Pinot Noir grapes only grow best in the Willamette Valley. But as we see, Bellingham may be the next place to buy Pinot, so it's an opportunity, not a vulnerability. We can apply the same thing to individual crops, and this is just, I literally did this two days ago, so I don't know, this is just kind of the edge of where we're going. But looking at the productivity per acre yields of different crops, and seeing how those are affected in the future. So this is winter wheat, grown a lot in Washington and Idaho, and this is where those yields, we run them through a similar bioclimatic envelope modeling approach, and this is where those yields might occur in the future. So we see that there still is a lot of viable area for growing winter wheat. It's just not necessarily the same area as you see today. So this is getting towards some information the farmers might actually use in the future to adapt to climate change. So here's just a good indication, the purple is the areas that are sort of losing productivity, and the green is areas that are gaining productivity. So again, vulnerabilities and opportunities. Okay, so how do we do all this stuff? We use Python for scripting the whole process, and use all the open source tools here. So I'll just go through real quick. Rasterio, Sean's given a talk on that already, so hopefully you guys have seen that, but it's great data access for raster data. It's beautiful, it's simple, it does what it says. It's built with GDAL under the hood. GeoPandas is a project that I'm working on, contributing to, I should say. That is great. I don't know if anybody's used pandas for just working with kind of two-dimensional tabular data structures, but in my mind it replaces Excel, and GeoPandas adds geo capabilities to that. NumPy for sort of the array data structures, rasterstats, which is a module for doing kind of vector-on-raster operations. Scikit-learn, and scikit-learn is the machine learning. That's the supervised classification algorithms. And then pyimpute is the software that we're developing to kind of tie it all together. It's mostly just like a set of utility functions that kind of clean up a lot of tedious work that you would otherwise have to do. So, loading raster data. This is sort of the canonical example of rasterio, but you open the raster, you read the data in, you get the metadata, simple. In a Python environment, I do a lot of work in an IPython notebook, an interactive environment, so I don't want to go from sort of an in-memory representation of the data and dump it to disk just to be able to look at it in QGIS, so it helps to be able to visualize it in the session that you're working in. So, you know, we just have this matplotlib function that plots your data. So here we're plotting the rainfall data.
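A minimal version of the rasterio read and quick in-session plot the speaker describes might look like this; the file path is a placeholder, and the project's own plotting helper is presumably a small wrapper around the same matplotlib call.

```python
# Read a climatic raster with rasterio and eyeball it with matplotlib.
import rasterio
import matplotlib.pyplot as plt

path = "data/summer_precip.tif"   # placeholder path to one explanatory raster

with rasterio.open(path) as src:
    arr = src.read(1)             # first band as a 2-D numpy array
    meta = src.meta               # driver, dtype, CRS, transform, width, height
    nodata = src.nodata

print(meta)

plt.imshow(arr, cmap="viridis")   # quick look, not cartographic-quality output
plt.colorbar(label="summer precipitation")
plt.show()
```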
It's not quite as pretty cartographically as say using QGIS or another desktop package, but it's really good for just inspecting your data live. So the training data, this is sort of, this is the crux of it really, this is your model will only be as good as your training data. So your explanatory climatic variables, I've listed them all there, it's basically just takes them as a list of paths to your rasters. Your response variable or your y variable there is just a path, and Pi impute provides this load training rasters function that just takes your lists of rasters and pulls them into the data structure that's required for scikit-learn. So there's little detail on that. So the band, in other words, your raster band, this is a really small example, 200 by 140 pixel image. You notice the training data flattens that out. We have 2800 pixels, and so what it effectively does is it removes that two-dimensional kind of spatial component to it and just treats each pixel as an individual observation. So you're sort of, you're removing any spatial dependence and you're really treating each pixel as a separate observation. If you're, a lot of times you'll have your observation data not as a raster but as a vector. So you'll have, for instance, points, point observations where you've gone out and taken, put a plot down and taken a vegetative sample or something. So if you have your data as a vector, typically points, this also supports the same sort of workflow. You define your list of rasters but then your explanatory variables can be explained by, can be explained by a vector data set and you name what field contains your response variable. So all the nitty-gritty of how this kind of relationship between the X's and the Y's gets built is all the kind of details of scikit-learn. And this is a brilliant machine learning package. It really, really saves a ton of work and if you're doing anything in sort of predictive algorithms, definitely give it a try. One of the great things about it, well, I'll get to that in a second. So there's kind of two, there's a couple steps you go along. First is you just instantiate the classifier and you fit your data to it. And that fitting is that drawing of the relationship between your X's and your Y's. One thing to notice, I've commented out a bunch of different classifiers and these are all, these are all, you know, there's at least a dozen, possibly dozens of them in scikit-learn. What this allows you to do is pick a different kind of mathematical model for how to draw that relationship and then use the same exact API. You can just literally comment out one of those lines and then rerun your analysis. So this is, we started doing this work in R and, you know, we wanted to switch from, you know, technique A to technique B and that means like switching libraries and you have to, you know, kind of reconfigure your data formats and your data structures completely. This allows you to literally just comment out a line and try something new. So it's really great for experimentation. Once you fit that model, that relationship, you can evaluate the model using a couple different metrics. And so, you know, I'll just kind of breeze through this. The confusion matrix kind of gives you a sense of like where you get the false positives and false negatives. You have an overall accuracy score, 83%, it's not all that great, but work on that. Feature importances, so in other words, how much did each of your explanatory variables contribute to the predictive power of the model? 
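Before getting to those evaluation numbers, here is a hedged sketch of the loading and fitting steps just described. The load_training_rasters name follows the talk and the pyimpute documentation as I recall it, the raster paths are placeholders, and exact signatures may differ between pyimpute versions; the commented-out imports show the swap-one-line-of-classifier trick the speaker mentions.

```python
# Sketch of the training step: flatten rasters into per-pixel observations,
# then fit a scikit-learn classifier. Paths are placeholders; pyimpute
# signatures may vary by version.
from pyimpute import load_training_rasters
from sklearn.ensemble import RandomForestClassifier
# from sklearn.tree import DecisionTreeClassifier      # swap in by editing one line,
# from sklearn.linear_model import LogisticRegression  # the fit/predict API is the same

explanatory_rasters = [
    "data/mean_temp.tif",
    "data/summer_precip.tif",
    "data/growing_degree_days.tif",   # ... and so on for the nine variables
]
response_raster = "data/food_production_zones.tif"

# Every pixel becomes one (X, y) observation; the 2-D spatial structure is dropped
train_xs, train_y = load_training_rasters(response_raster, explanatory_rasters)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(train_xs, train_y)   # draw the relationship between the X's and the Y's
```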
So in this case, you know, feature 10, which I don't recall exactly what that was, was the most important. In other words, it drove the most, explained the most variability in the data. And then you can do cross-validation, which is basically leaving out part of your data set and using the rest to predict that part that you left out. So that process is very iterative, very hands-on. It's very much finding a model that really works and really explains your data. So once you have a model that you feel has a good amount of predictive power, you can plug in future data into that. So unseen predicted future climates. So in this case, we were using 2070 data, we plug that in, we load the target data, and we use this impute function from pi impute to actually run pixel by pixel to actually run the prediction for that geographic space and then write the whole result out to rasters. So what does that look like? You just get a response raster data set, which is sort of the most likely zones. And so in this reduced resolution version, it's kind of hard to see the pattern, but those are our predicted zones. It just predicts the most likely. You can also look at the probability of any given outcome. So in our classification example for the ag zones, we had about 180 agricultural zones. You can look at the probability of any one of those zones occurring. So it's not, again, it's not just a binary yes or no. We can say what's the probability where of the occurrence of the conditions that define zone 127 occurring in 2070. And you can get this probabilistic surface that sort of gives you a sense, when you're starting to make decisions, it becomes really important to give a sense of the uncertainty in your estimates. And because all this data is just loaded up as numpy arrays, if you want to do any sort of further analysis on the results, just do array math. So for instance, if you want to see the difference between the future zones and the current zones, literally you can just run the numpy not equal command and subtract them out. Oops. And you can get a map of where the zones changed and where they didn't. So there's a lot of further analysis you can do just in numpy alone, just working with arrays directly. Okay. So that's all I got for slides. I am, we're about a third of the way through this project. And I think the first third was really kind of devoted to sort of gathering a lot of data and building these techniques and becoming comfortable with these techniques. The next two-thirds of the project, we're going to be actually diving in and figuring out how to apply this data and get real meaningful results for farmers in our area. So if you have any ideas about kind of next directions that we can take this or things to explore, come talk to me. And in the meantime, I hope some of the tools that we've developed to do this might be applicable to some problems you guys have. Thanks. Applause Can we do some time? Some questions? I'm curious about how time figures into what you've been doing and in particular in your training data set. Do you have a series of snapshots in time of crops at different locations? Or are you just taking one sort of current slice in time of where crops are now? That's how we're training the data on kind of the current slice of time. But that's actually a really good point. We could kind of incorporate past agriculture. You could go back and use past ones. Absolutely. 
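Collecting the evaluation, prediction and array-math steps the speaker describes into one hedged sketch that continues the previous one: the load_targets and impute calls follow pyimpute's documented workflow as I recall it, the output filename and the 2070 paths are assumptions, and the cross-validation line is just one way to produce the kind of accuracy figure mentioned.

```python
# Continues the previous sketch (clf, train_xs, train_y already defined).
# Evaluate the model, predict onto 2070 rasters, then difference the results.
import numpy as np
import rasterio
from sklearn.model_selection import cross_val_score
from pyimpute import load_targets, impute

print(cross_val_score(clf, train_xs, train_y, cv=3).mean())  # rough accuracy
print(clf.feature_importances_)                              # per-variable contribution

# Same explanatory variables, but from a downscaled 2070 climate scenario
future_rasters = [
    "data/2070/mean_temp.tif",
    "data/2070/summer_precip.tif",
    "data/2070/growing_degree_days.tif",
]
target_xs, raster_info = load_targets(future_rasters)

# Writes a most-likely-class raster plus per-class probability surfaces
impute(target_xs, clf, raster_info, outdir="out_2070")

# "Just do array math": where does the predicted zone change? (grids assumed aligned)
with rasterio.open("data/food_production_zones.tif") as now, \
     rasterio.open("out_2070/responses.tif") as future:
    changed = np.not_equal(now.read(1), future.read(1))
print("pixels that switch zones:", changed.sum())
```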
And then also going forward, have you looked at predicting how things will evolve over time as opposed to just taking one point in the future, like 2070? Yeah, yeah. We're sort of limited by the data that we're using was kind of aggregated to these 20-year periods. So we're looking at, you know, 2030, 2050, so on. But we can actually get access to that kind of daily time step data. And I mean, we could theoretically predict on an annual basis, for instance, where the zones might be, whether that is useful or not. We're not sure. I think we're picking like kind of a decadal time step and just running the model for a couple of minutes. We might find a way to see the things moving. Yeah, we're definitely getting there. At this point, it's just a challenge of just dealing with all this data. So, you know, climate data is four terabytes and any derivative works thereof would be exceeding our capacity at this moment. So, thanks. Thanks for your talk. I was wondering if your scripts have any interaction with the GCM servers, FTP or HTTP? Or are you assuming when you start this process that you have the scenarios you want and you have the data already locally for the downscaled data that you have? In terms of the scripts that we're using, yeah, we assume that you've already found a source of downscaled climate data that you're comfortable with or multiple sources. And you have already processed them to the same spatial extent. So, yeah, there's a lot of pre-processing that is assumed before you go into something like this. But, yeah, that's a good point about the different GCMs. I'm showing results from a single one here. There's sort of this emerging theme in the literature that there's no sense in picking a climate model and saying, that one's right. All models are wrong. So, there's this kind of growing notion that you want to run your model on multiple different GCMs and be able to look at where they agree and where they disagree and use that as sort of a measure of uncertainty. And so we've done some work on that in the forestry realm. And it's really interesting. You see these core areas where 11 different climate models all agree. But then there's these fringe areas where they don't. And so that's, you know, when you're making the decision, those sorts of uncertainties definitely play in. You mentioned that you have a lot of data like four terabits data and running these models against a lot of GCMs. Are you running into any sort of processing challenges and how are you overcoming those challenges? First part, yes, we are running into processing challenges. I think a lot of it is, I did, I actually said in my abstract that I was going to talk about performance implications. I couldn't fit all the slides in 20 minutes. But yeah, there is definitely a lot of tuning in of the algorithm that you can do in the scikit-learn end to get reasonable performance. Some of these algorithms are really memory hungry. So that's the limitation that we've run into so far is just RAM. But then in terms of disk, you know, honestly the four terabytes refers to sort of the daily time step data for all these different GCMs. And really what we do is kind of preprocess using, you know, NetCDF, you know, NetCDF and Python. We do a lot of preprocessing of the data to get it down to a reasonable time aggregation. So the data that we're actually working with here doesn't, isn't all that on us. 
It's just the time it takes to run through some of these, some of these algorithms and the RAM that's required is just through the roof. So there's some tuning you can do on that. Have you looked into a distributed processing to handle some of that memory issue? Yes, we have. There's not really a good way to parallelize a lot of these scikit-learn problems themselves. And so what we're doing, you know, we sort of do the dirt sheet parallelization where, you know, if you want to do 2050 and 2060 and 2070 predictions, just run them on three different nodes. So that's the sort of parallelization we were after at this point. But parallelizing the algorithm itself, we haven't, that's kind of a deeper problem with scikit-learn that I haven't really figured out. But there's some really cool machine learning algorithms from coming out of Google actually that are working, that work on like Hadoop file system and things like that. So I haven't tried them. So, I don't know. Yeah. I'm wondering how you narrowed it down to those nine variables, I guess, as being the ones that matter in suitability? Yeah, so we did sort of a pseudo stepwise approach, these right here. Yeah. Yeah. We did a stepwise approach where we, I think we considered probably 30 different variables and we did a stepwise approach to sort of bring down the ones that only explain, they explained the most amount of variance. So in other words, the ones that we left out really didn't have that much explanatory power. So I'm just wondering about some more nuanced ones that seems like they would have a large effect in reality. Like for example, you can have the same mean precipitation, but if it's bigger rainstorms coming less frequently, that seems like that would still have a dramatic effect on suitability. Absolutely. Yeah, I think that's sort of one of the emerging fields in climate data is how, is temporal aggregation and how do we take these daily or even sub daily time step data and how do we sort of aggregate them into biologically meaningful variables. You know, I don't know if a lot of tree species, when we're working in the forestry realm, there's a lot of tree species require a certain number of days of frost or conversely can't handle a certain number of days of frost or require a certain number of days of below 20 degrees to kill off their primary pest or something like that. So there's a lot of very species specific biological indicators that can be pulled out of this data, but it's the space of variables that you can pull out of the time step data is really large and there's not a lot of research been done as to what the best variables are. Certainly these right here predict, I think the mean monthly summer precipitation, for instance, is a good indication of how much does it rain during the growing season, right? And so that's sort of getting at some biological response there. I just wondered with some of the challenges around the scale of your data, both spatially and temporally, have you looked at using or implementing some database technology? Some database? Technology, so maybe I missed it, but so you've got lots of russes which are stacking up on disk both input and output, have you looked at building a database to do that instead? So just store everything in a database? Yeah, I mean we use HDF and net CDF5 file format which in my mind is a database to do kind of the climate data processing. Yeah, and I think I've never really brought into rasters in the database personally, but I don't know. 
It could be, I haven't seen the advantage over a disk based approach. But if it's gridded then it doesn't need to be a raster, because essentially you're working with vectors in the vector space for your analysis. True. Raster is significantly faster I think for certain types of analyses, but yeah, we could definitely do that. And I think for a lot of the work we've done in the forestry realm, we're dealing with point observations where you go out and actually do plots. So a lot of that stuff, yeah, we're using a database for that, we're using PostGIS for that. I may have missed this, but how many variables did you start with? I, you know, honestly I don't recall exactly, but we had over two dozen variables, probably closer to 30 that we looked at. So these were the significant ones. Thanks Matt. You say that you're going to publish this in a peer-reviewed journal? I hope so. Oh great. Yeah, not yet. Yeah, I think we have a lot of work to do, you know, sort of on the data side and the interpretation of the results. You know, a lot of these results, I say preliminary results, very heavily underscore preliminary. We've really been focused on getting the analytical technique down and getting the data in-house and that sort of thing. And so the next step really is to start producing meaningful results, right? And once we get those, we'll start thinking about publishing. Code is open source on GitHub. I'll put it up here. Yeah, the pyimpute code is on GitHub. And then, you know, honestly it doesn't really do much. I think it says so right in the README. It says it's just a bunch of functions that kind of do a little bit of data manipulation that make this kind of thing just a little bit easier. And so the real work is scikit-learn and rasterio and those sorts of tools. Last one. So just taking a cursory look at some of your results, it seemed that areas that were clustered became more segmented and fragmented. So my questions on that are, is it a degree of fragmentation that impacts the scale of agriculture that's done in those areas? Or does it not impact that? And then the other thing is, are these areas being spread into areas where it's not feasible to do agriculture, like the National Forest? Right. Yeah, you identified the two biggest problems with our current approach. Yeah, we're working on basically masking out non-agricultural land. Even though it is climatically feasible for non-agricultural area to shift to agricultural area, we're really focused on ag lands. So that's to answer the second part. And then the first part, about the spatial heterogeneity of the data, you know, I had mentioned kind of in passing that each pixel is effectively an independent observation, and in reality we know there's some sort of spatial autocorrelation between agriculture zones. We're not going to see speckled agriculture zones across the country. So there's a lot of opportunity to sort of incorporate those geostatistics into this. We haven't figured out how to do that. But yeah, ideally we would be sort of bringing in some sort of measure of clumpiness to prevent that from happening. So that may or may not represent a real result. It's also important to keep in mind that a lot of the shift from zone A to zone B may not actually be all that great. It could be that the same crops are grown and the same agricultural practices exist. It's just, you know, a slightly different temperature.
So there's a lot of interpretation to go around. But I think we're sort of shifting away from the idea of using zones themselves and more towards kind of predicting productivity of individual crops. So that's where we're thinking about going. So that would get around that problem. Thank you, man. Thank you for early questions.
|
As the field of climate modeling continues to mature, we must anticipate the practical implications of the climatic shifts predicted by these models. In this talk, I'll show how we apply the results of climate change models to predict shifts in agricultural zones across the western US. I will outline the use of the Geospatial Data Abstraction Library (GDAL) and Scikit-Learn (sklearn) to perform supervised classification, training the model using current climatic conditions and predicting the zones as spatially-explicit raster surfaces across a range of future climate scenarios. Finally, I'll present a python module (pyimpute) which provides an API to optimize and streamline the process of spatial classification and regression problems.
|
10.5446/31676 (DOI)
|
patient developer at the University of Wisconsin-Madison's Geoscience Department. I'm also presenting with Rishana Mead of the Applied Population Lab at the University of Wisconsin and Rich Donahue of the University of Kentucky who cannot be here but he's watching the livestream. Hey Rich. Today I'm going to be talking about adaptive maps and how we as a community can not only improve the products we create with them but hopefully also contribute something back to the wider web development community as well. I should also note here that we're not proposing any specific solutions for creating adaptive maps but are trying to start a conversation about how we can improve these products and hopefully get some ideas from you about how to create these. But first because there really isn't a definite definition of what an adaptive map is we're going to start by getting an idea of what adaptive design is, how it differs from responsive and why it matters to mapping. So responsive is always adaptive but adaptive is not always responsive. Responsive is just a piece of the adaptive puzzle. Responsive web design is largely about optimizing the layout of a page and delivering the appropriate media to the user and you may have used frameworks like Bootstrap or Foundation to get over the hump and make this process a lot easier. Adaptive design on the other hand deals much more with altering the context of the material, the purpose and the functionality of a website in order to meet the needs of the users. Of course optimizing layout is still always a very important part but you can only go so far as to deliver or you can take this to the point of delivering an entirely different experience to the user depending on the size or capabilities of their device. A really nice example of adaptive web design is the Lufthansa website. As you can see their desktop website is a fairly normal airline website. It's centered around exploration. For example you can see the route map, find flight deals, exploratory options, book travel. However, if you go to the same URL on a mobile device it's a very different experience. First thing you see is a news article about how there's impending pilot strikes which would be very important to you if you're out traveling. Additionally, the most obvious features are checking in for your flight and checking flight status which if you're checking their website while you're traveling these are probably the things that you want to do right away and they're super easy to find. One of the things that makes making adaptive maps perhaps more rich is using your user's location. There's many different parts of adaptive map design that kind of fit into creating one but user location is something that you can really use to create a nice custom experience. For example you can use something like the OpenStreetMap overpass API. If you're not familiar with it basically what it does is it allows you to query the OpenStreetMap database and return vector features about what's there. You can imagine how that might work. I coded up a little example that I'm not going to do a live demo for you but it uses the overpass API to find the user's location and then find how many roads are in your area and depending on the density of roads it'll deliver either a road map or a terrain map depending on where you are. 
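A rough Python sketch of that road-density check against the Overpass API is below; the endpoint, search radius and threshold are illustrative assumptions, and the original demo was presumably written client-side rather than like this.

```python
# Sketch: count OSM roads near a location via the Overpass API and pick a basemap.
# Endpoint, search radius, and threshold are illustrative assumptions.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def road_count(lat, lon, radius_m=2000):
    query = f"""
    [out:json][timeout:25];
    way["highway"](around:{radius_m},{lat},{lon});
    out ids;
    """
    resp = requests.post(OVERPASS_URL, data={"data": query})
    resp.raise_for_status()
    return len(resp.json()["elements"])

def choose_basemap(lat, lon, threshold=25):
    # Few roads nearby: terrain tiles are probably more useful than a road map
    return "roads" if road_count(lat, lon) >= threshold else "terrain"

print(choose_basemap(46.1912, -122.1944))   # e.g. somewhere near Mount St. Helens
```

In practice you would cache the result and tune the radius and threshold; the point is only that the default basemap can be chosen from a cheap query rather than hard-coded, without taking any option away from the user.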
The user may not actually want a road map or a terrain map in those cases but if you're hiking around and there's only one road in your area there's a good chance that terrain map is going to be a lot more useful to you than a road map. The point isn't that you're giving the user something different and they can't change. You're more or less just trying to anticipate their next action. You're not limiting their choice in any way, you're just anticipating their choices. Another example that Rishana brought up the other day was you're out driving around and you're in an unfamiliar rural place and so you pull out your phone to check where you are. Oftentimes you're at such a large scale that you can't actually tell where you are. You just see fields around you. But there's no reason that we couldn't use something like the Overpass API to just center yourself on your location and also the nearest town or village to give you some context of where you are. That's something that could be very nice because if you're in a rural area you don't have a lot of bandwidth, you can often sit there trying to figure out where you are for a long time to get some context. So again, just anticipating the user's next actions using user location. Another component of adaptive design is adaptive representation, talking about map symbology and projections. Perhaps the coolest example of adaptive representation is Bernhard Genny's adaptive composite map projection. It's really cool. So basically when you start out, or as you navigate around the map, it'll change the projection depending on your extent and your origin. So it dynamically picks the optimal projection for whatever slice of earth you're looking at. Although it's a really cool proof of concept, unfortunately it hasn't been implemented in some of the more popular mapping APIs. So it's kind of limited to a demo right now, but hopefully someone picks it up and runs with it. There's been a couple attempts at doing it in D3, but nothing that's really caught on heavily. Also the new Google Maps takes this approach a little bit if you've seen that. If you go to the satellite layer in Google Maps and then zoom out, it'll zoom out to a globe, which it's not exactly an adaptive projection, but it's kind of getting at that same concept. Another really great example of adaptive mapping is the dynamic hill shading example for Mapbox. In vector tiles, they've developed methods that allow you to dynamically change the asmuse, the altitude, the depth, the shadow, and the highlight of the lighting on your terrain in the browser. Although I haven't seen an example of this in the wild yet, it's very promising technology and you can imagine the sorts of adaptive map experiences you'll be able to create in the future. Another example that demonstrates this is using different scale factors for a proportional symbol map. Rich coded up this example the other day to show how you can do this. It demonstrates both an application of what we're suggesting adaptive cartography is capable of and also new ways of bringing traditional cartographic practices into a web environment. A leaflet circle marker feature, it provides a fixed radius circle that allows proportional symbol maps to work across a variety of zoom levels. It doesn't really work for visual comparison at other zoom levels. The leaflet circle feature, it maintains a constant radius in terms of geographic distance, but it's less useful for proportional symbol map as well. 
But if you apply some conditional statements, you can easily customize the proportional symbol map to different zoom levels and perhaps even different screen sizes. Another simple tweak you can make for many maps would be to change the minimum symbol size if we know the user is using a touch interface. For example, in a mouse and keyboard setting, sometimes it's more aesthetic to have little symbols that you can expect the user to mouse over and interact with. But once you move over to a touch interface, those tiny symbols can be very frustrating for those of us with wide fingers. And so a really easy thing to do would be, on touch devices, follow Apple's design principles and use a minimum of like a 44 by 44 pixel symbol for all of your UI components and map symbology to make it a little bit easier for your user. Another place that adaptive mapping can kind of take place is on the server side. It's not necessarily just a front-end problem. One example is the paleobiology database API. It's one of the projects I work on at work. I don't handle the API per se, but the other things. But one thing that API does do very nicely is that we have millions of fossil collections all over, and so if you were to make a map of those, you can't really load them all at once. So what we do is we cluster them into different bins server side. So we have six degree bins here, two degree bins, half degree bins, and then of course no binning. The purpose of clusters, any time you use them, is generalization, right? And so it doesn't even make sense to request all this data, cluster it client side, and then hide all the data that you just requested. So the advantage of doing it server side is that you're only requesting exactly what you need and you're still providing a generalized map for your audience. The other thing is that with this approach you can write other services. So if I was to click on a cluster, it's very easy to then make another small request to the server and ask what collections are in there. If you want to know more about a collection, it's very easy to make another small request to find out more information about that. So you're not really losing any richness, but you're making your map a lot faster, and it's another way you can make your map more adaptive. Some other adaptive examples. Brad Frost, he's a web designer who is one of the only other people I could find who wrote something up about the concept of adaptive maps. And his proposed solution, I mean this was two years ago, but two years ago his solution was to default to a static map on mobile and an interactive map on a desktop. I'm not sure I completely agree with that approach, especially now, because you can still provide a very rich experience on mobile. I think he was thinking that interactive maps can be very clunky on mobile devices. But it's an interesting example of someone else giving the problem some thought. Another example to bring it kind of all together is this example that Rishana coded up. We haven't really seen all these little components I've talked about come together in one, and so she created this example protest map application. Protests are really a great example use case for an adaptive map, right? On a desktop a thousand miles away you might want to know where protests are happening, explore them, get information about them, but on a mobile device you're more likely to want to know where they are right now, how close I am to them, perhaps report one.
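Before looking at that protest-map example in more detail, the server-side binning idea described above can be sketched in plain Python; the record format and bin sizes here are illustrative, not the paleobiology database's actual implementation.

```python
# Sketch of server-side binning: aggregate point records into grid cells of a
# chosen size (e.g. 6, 2, or 0.5 degrees) before anything is sent to the client.
# The sample records are made up.
import math
from collections import defaultdict

records = [
    {"lon": -122.3, "lat": 47.6}, {"lon": -122.9, "lat": 47.1},
    {"lon": 2.35, "lat": 48.85},  {"lon": 2.29, "lat": 48.86},
]

def bin_points(records, cell_deg):
    clusters = defaultdict(int)
    for r in records:
        # snap each point to the lower-left corner of its grid cell
        key = (math.floor(r["lon"] / cell_deg) * cell_deg,
               math.floor(r["lat"] / cell_deg) * cell_deg)
        clusters[key] += 1
    # return one summary feature per occupied cell, centered in the cell
    return [{"lon": lon + cell_deg / 2, "lat": lat + cell_deg / 2, "count": n}
            for (lon, lat), n in clusters.items()]

for cell in (6, 2, 0.5):          # coarser bins for smaller map scales
    print(cell, bin_points(records, cell))
```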
So on the left side we have the desktop version and on the right the mobile side. On the desktop version you can search by different polygon, you can filter them in many ways, query them. Whereas on the mobile version you have more succinct interface. You remove the zoom buttons, you make it very easy to add a protest and also you can still explore it and filter it, but it's a much more concise experience for the immediate needs you'd have while on a mobile device. And this is really a, it demonstrates a common theme with adaptive mapping and that's that mobile use usually has more immediate environmental needs that can be addressed and it requires a tailored experience for that. So this leaves us asking where do we go from here? We have all these little bits and pieces, we don't really have any solid solutions for anything. Obviously one solution would be to create different pattern libraries for these things. We're used to using little JavaScript libraries for everything, why not one for adaptive mapping? You could in theory package things up like this for example the symbology, the adaptive composite map projection and those may be very useful, but of course that comes with more maintenance trying to generalize them for many, many purposes which may lose some richness. The other solution is maybe this concept of adaptive mapping is just a bad idea which it's very possible. Depending on your situation it might make a lot more sense for you to create a separate desktop application or a web application and a dedicated mobile application that adaptive maps kind of by definition are compromised, right? You're never going to get all of the richness in an adaptive map that you kind of cross a tailored map for a specific platform. So I guess it's a personal decision you have to make depending on your team. If you have a bunch of web developers and no native app developers, it's pretty easy decision. It makes a lot more sense to make an adaptive map. However, if you have a larger team that has dedicated web developers, native app developers, maybe a native app is a better bet and it's not worth going through the trouble of trying to create an adaptive map. So it's really a personal choice of what your needs are. That's all I have. Thanks for coming. I'm honored that so many of you came and can find the slides and the code examples at those URLs. All the slides and code are CC0, so use it for whatever you want. Thanks. I have some time for some questions. Anyone have questions? You're distinguishing between desktop and mobile, but isn't the differentiation really about the screen size and not really the mobile aspect of it? Perhaps. Those lines get kind of blurry when you get into laptops with touch interfaces, various size tablets. I don't know. I find those boundaries to be kind of all over the place. I think we still have this large divide between mouse and keyboard environment and kind of everything else. Mobile devices are very, very broad term these days. That could mean your laptop even, I guess. I'm trying to draw a line there, but it's really hard to. I guess you could make a better argument that there's more of a divide between desktop operating systems and mobile operating systems that you can design for. Thanks. One second. A library of uses, use cases. Just to define more what users, how do users use certain maps? It's not devices. I mean, just the blur between device specific. I'm sorry. You can have categories of uses. Make it more clear. The use cases. 
More examples of how you'd use an adaptive map. I guess another example, a project that I worked on was this mapping application for the paleobiology database. We didn't even think of creating it as an adaptive map experience at first. It was just a normal interactive map that we hoped would scale to different screen sizes. What we found was that we are out having a beer and we had a question that we wanted to answer and pulled out our phones to answer it. It was a really clunky interface for answering those questions. So we figured out that when you're on a phone, the application serves a fundamentally different purpose compared to when you're in a more desktop, larger screen environment. I think oftentimes larger screens are more conducive to exploration whereas mobile devices are more appropriate for answering immediate questions. That's why people pull phones out of their pocket. Hi there. You mentioned anticipating what a user wants to do when they're using the application. Do you have any words of caution that you would say to that? Because trying to anticipate what a user is going to do is often a very difficult thing. It's insanely difficult. That's why I was trying to get at the point of you're trying to guess at their next action but not restrict them as well. So that's why I really, really like that Lufthansa example is because they change the focus in the context but they don't really remove any richness either. You can still search for travel, book it, but you're making it a lot easier to accomplish tasks that you think they're going to want to accomplish immediately. So I guess to answer your question, it's about making the things they need more immediately available as opposed to limiting their choices. Anyone else? I just wanted to point out that the vector tile movement seems like it's really going to be the driving force behind this kind of design. It's really exciting. I'm looking forward to seeing what we cook up. It really opens up a lot of doors for what we can do. Anyone else? Okay. Thanks so much.
|
In recent years, the web design community has moved quickly to accommodate the various devices and methods for accessing web content. The FOSS4G and wider development community have responded to this paradigm of adapting the layout of content to scale to the device of the user by creating and leveraging tools such as Leaflet and D3. However, there remains a lack of knowledge, understanding, and conversation about what it truly means to create a map experience that meets the present needs and expectations of the user. Designing an adaptive map should go beyond simply fitting it into a responsive layout. User variables, such as the mode of interaction and location-based needs, raise map-specific UI design questions that this community is uniquely positioned to answer.This talk will explore what it could mean cartographically and experientially to adapt all aspects of the map experience to the needs of the user using principles already embraced in other communities. Our goal is to provoke a wider discussion of how we, as a community, can work toward these objectives. Regardless of expertise level, anyone who is involved with the creation of interactive web maps has inevitably come across the problems associated with, and will benefit from involvement in this conversation.
|
10.5446/31677 (DOI)
|
Good afternoon. Thanks for coming. My name is Joe Roberts. I am from the NASA Jet Propulsion Laboratory, or JPL, just like the presenters before me; it's managed by the California Institute of Technology. I work on a team called the Global Imagery Browse Services, or GIBS; it's a collaboration between JPL and also the NASA Goddard Space Flight Center. So today I'm going to talk about OnEarth. So just a brief outline: I'm going to explain what OnEarth actually is, a little more detail about GIBS, get into some more of the technical stuff about OnEarth, a little look at performance and metrics, and if there's time maybe do a little preview of some of the client applications that you could build using the OnEarth server. And at the end I'm going to just briefly go over our open source project. So what is OnEarth? It's basically an open source image processing and server software package. In other words, it's a tile server. It's intended to provide an out-of-the-box solution for generating geographic imagery. It's a little bit more than just a tile server; we also include components to build tile pyramids from global mosaics and then store those pyramids in this image archive. So the intent of the archive is to be able to serve tiles to some client application using standard web protocols. For our case we use WMTS, Tiled WMS, or KML. Tiled WMS is actually an extension of WMS. It was developed at JPL a while back, a long time ago. And it's just a way of creating a predefined set of cached tiles. So when you do a WMS request you retrieve an already cached set of tiles. And I guess the key selling point of OnEarth is that it's designed to be lightweight and very performance driven. We use this special file format called the Meta Raster Format, or MRF. It's essentially, it's not an image format per se, it's an image container format. And we use this because it has very fast access and we're not limited by image size or resolution. We use a lot of GDAL to generate the MRFs. We have this custom-built MRF driver that we use to, you know, run gdal_translate from one image format, JPEG or PNG or whatever, and be able to generate these tile pyramids into the MRF format. For the actual image server it's essentially just an Apache module. We call it mod_onearth. And on top of that we have some configuration tools to just kind of help set up the server. Just a little bit of background about OnEarth. It's been around for a while. Its development began originally at JPL in the early 2000s. The lead developer at that time, Lucian, he's now at Esri. But luckily, being an open source project, we still get some pretty good contributions from all around. It was formerly known as the Tiled WMS server. This is back when, at the time, we were just using Tiled WMS, but now we've expanded to the more commonly used WMTS specification. Some of the other things OnEarth has been used for include planetarium shows at public museums. It was the first image server to serve out 15 meter global Landsat data or imagery. And for those of you who remember NASA World Wind, it was kind of a predecessor to Google Earth. OnEarth was used as the actual image server for that application as well. Outside of planet Earth, I guess, we have an image server for Mars and the moon, and respectively we call those OnMars and OnMoon. PO.DAAC, which is the Physical Oceanography Distributed Active Archive Center, is based at JPL.
They have this State of the Ocean tool to present imagery of the world's oceans that's also built on this server. Then a couple of years ago we started the GIBS project, back in 2011, and OnEarth was selected as the primary image server for that project as well; and on top of GIBS the client application most commonly used is NASA Worldview. And finally, just last year we released the software as open source on GitHub, so it's freely available and anyone can go and access and download the software. Okay, so a little bit more about GIBS. NASA, as you all know, has several Earth observation satellites constantly taking measurements of the Earth, and we have this really rich and vast image archive, several useful assets available to the general public, scientists, and so forth. So the purpose of GIBS is to provide a full resolution image archive and access services to get to all these different data products. Currently there are over 75 global data products, and most of them are available within four hours. It's part of the Earth Observing System Data and Information System, or EOSDIS, and because it's available to the public we need something that's really scalable, fast, responsive, and so forth. For more information there's a link down there. We also have another GIBS-related presentation at 4 p.m. that's going to provide some more of the context and historical background of GIBS. So I guess one of the key selling points of OnEarth is speed. The key to the speed and storage is this Meta Raster Format that I mentioned earlier. Much of the overhead in actually getting to the imagery, not most but much of it, has to do with the file system. If you have images sitting on a machine somewhere, just a bunch of PNGs or millions of tiles sitting in file folders, that introduces some file system latency. So to reduce that we put everything into one big data file. When you're accessing the data you just have to go to one point, without any heavy file system operations. This has been done before; nowadays there are different image file formats or container formats for tile pyramids, such as MBTiles and GeoPackage, but MRF has been around since before those. We didn't really consider anything else at the time. Let's see, what else. With MRF we can support multiple projections. For GIBS we use geographic lat/long, Arctic and Antarctic polar stereographic, and also Web Mercator for Google Maps, Bing Maps, and so forth. Here are some of the compression types used; this is for when you pull out the image, and you can pull out either JPEGs, PNGs, or even just binary data. I don't know if any web clients actually use TIFF, but primarily it's going to be JPEG or PNG. I mentioned earlier there's a driver for GDAL that we use to generate the MRFs. To do that we basically just run gdal_translate: we take an image composite and convert that to an MRF, and on top of that we run some GDAL commands to build up the different zoom levels of that pyramid in order to get the final product. MRF is not by itself a single file; it's actually composed of three different files. The first is the MRF header, just an XML-based file containing the metadata for the MRF itself. There's also this index file, which is basically just a lookup to somewhere in the binary data file, which contains all the images.
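Before getting into those three component files in more detail, here is a rough, hedged sketch of the GDAL-based generation step just described: converting a mosaic to MRF with gdal_translate and then building the pyramid levels. This is not the project's own MRF generator tool mentioned later, just an illustration under assumptions; the creation option names (COMPRESS, BLOCKSIZE) and the input filename are placeholders that should be checked against the GDAL MRF driver documentation for your GDAL version.

```python
# Rough sketch of the two GDAL steps described in the talk: translate a global
# mosaic into an MRF, then build the lower-resolution levels of the pyramid.
# Creation-option names (COMPRESS, BLOCKSIZE) are assumptions taken from the
# GDAL MRF driver docs; the input filename is a placeholder.
import subprocess

src = "global_mosaic.tif"      # hypothetical input mosaic
dst = "global_mosaic.mrf"

# Step 1: convert the mosaic to an MRF with JPEG-compressed 512x512 tiles.
subprocess.check_call([
    "gdal_translate", "-of", "MRF",
    "-co", "COMPRESS=JPEG",
    "-co", "BLOCKSIZE=512",
    src, dst,
])

# Step 2: build the overview (zoom) levels of the pyramid for the MRF.
subprocess.check_call([
    "gdaladdo", "-r", "average", dst, "2", "4", "8", "16", "32",
])
```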
So the metadata file is pretty basic, actually. It contains just the full base resolution, some of the compression information, and the size of the tiles; we normally use 512 by 512. There are also additional elements besides this: you can have information about the color maps and other projection information as well. The data file is pretty simple, actually. It's just a bunch of JPEG images or PNGs stitched together side by side, starting from the full base resolution at the bottom of the pyramid and moving upward. Each block is the actual JPEG image, so once you pull the data out you actually get back that JPEG or PNG. There are some drawbacks to this: updates are only through appends. There are ways to go back and update some of the data, but it's a little more complicated to do that. The index file is just a binary file containing offset and size. So when you do a lookup for a tile it works really fast. You just look for the row; at the top of the pyramid you have 0/0, then 0/1, 0/2 and so forth. You just go down the row, look for the offset, then read that particular size of the tile, and you get back the imagery really, really fast. So, moving on to the actual OnEarth server. It's just an Apache module, so basically you can run this software with any Apache server; it's just a plug-in that you can configure. It contains the cache metadata, which is basically the information we saw in that MRF header, stored on the server so we're able to read information about that layer really quickly. For cases where there's no data (much of the Earth, if you only have a layer that's just the oceans or just the land, might have empty data), we have a special index of 0 0 that just returns a blank, empty tile. I mentioned earlier the common protocols: we have OGC WMTS, Tiled WMS, and KML. If you're familiar with any of those, there is a GetCapabilities document that a client application can read to figure out what layers are available and so forth. That isn't actually done through the Apache module; we have a separate CGI script that will return the GetCapabilities information. We don't exactly follow the specification for WMTS; we have this added time extension. Much of our data is historical imagery, so we have to go back in time. So we have this little time snippet, if you will, to be able to query back for days in the past. With KML, just a little pointer: you always get back global imagery. You pull KML files for the entire Earth, whereas with WMTS or Tiled WMS you can get the individual tiles. The whole purpose of the module is really just to translate HTTP requests, read the MRFs, and return the result to the client application. So, a little bit of information about the performance. This is a sample from MODIS data. I guess the key thing to take from it is that it's possible to get speeds of over 1,000 tiles per second. This is without network latency. These numbers are a little bit dated; they're from 2011, back when we were doing an evaluation for GIBS. I'm not sure how this compares with other tile servers nowadays. It would be nice if someone could maybe check it out and give an evaluation, and if there's better technology out there, I'd love to know about it as well. So, yeah, I encourage some comparison; see what you guys think. And just to see how OnEarth is working in the wild, these are some sample metrics from GIBS. There were about 40 million requests two months ago.
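To picture why that index lookup stays cheap even at that request volume, here is a small sketch of reading one (offset, size) record out of an index file and pulling the corresponding tile bytes out of the data file. It assumes fixed-width records of two big-endian 64-bit integers stored row by row, which matches the description above, but the authoritative layout (including how multiple zoom levels are concatenated into one index) is in the MRF specification; the filenames here are placeholders, and only a single zoom level is handled for simplicity.

```python
# Sketch of a direct tile lookup against the MRF index and data files
# described above. Assumes 16-byte index records: a big-endian 64-bit offset
# followed by a big-endian 64-bit size, stored row-major within one zoom
# level. Check the MRF specification for the authoritative layout.
import struct

RECORD_SIZE = 16  # assumed: 8-byte offset + 8-byte size per tile

def read_tile(idx_path, data_path, row, col, tiles_per_row):
    record_no = row * tiles_per_row + col            # row-major position
    with open(idx_path, "rb") as idx:
        idx.seek(record_no * RECORD_SIZE)
        offset, size = struct.unpack(">QQ", idx.read(RECORD_SIZE))
    if size == 0:
        return None                                  # empty / blank tile
    with open(data_path, "rb") as data:
        data.seek(offset)
        return data.read(size)                       # raw JPEG/PNG bytes

# Placeholder filenames; real MRF data-file extensions vary by compression.
tile_bytes = read_tile("layer.idx", "layer.dat", row=1, col=3, tiles_per_row=8)
```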
So there's a pretty decent number of visitors hitting the server, and it's not really putting any dent into it, so that's a good thing. Currently there are over 90 image products. With GIBS we only have 250-meter data, but in the past we've worked with Landsat, 30-meter and 15-meter data, before. So, to help configure the server, we have a couple of other tools just to make life a little easier if you're using OnEarth. There's this MRF generator. You can use GDAL yourself to generate all your MRF files, but the MRF generator abstracts all that GDAL command stuff away from you if you're kind of shy about using it. It also allows for automated processes: we use XML configuration files, so another system could hand down some XML files and have the imagery generated for you. There's also the OnEarth layer configurator. We use that to basically generate the server configurations, the metadata for GetCapabilities, and so forth. And a couple of other tools: there's the OnEarth legend generator, so you can take your own color map (we have our own GIBS format for color maps) and generate a nice little legend graphic with that tool. And there's also this metrics tool, which basically just converts Apache logs into a format that's useful if you're recording metrics. And this graphic here kind of summarizes how the system works. You prepare the imagery; this assumes you have the images available, or a global mosaic, and you're able to turn that into an image pyramid. Once that occurs, there's some configuration involved with the product layer, and you load that into the server, and from that point on you can serve out the imagery using these common protocols. For GIBS, we use OnEarth in conjunction with software called TIE. It's an image management and workflow system. It helps automate and manage our MRF generation. Another really great thing about it is that it keeps track of the metadata, and this is good for data provenance, which is important for scientists to be able to go back and track where the data has come from. Another great thing about TIE is that we can use it for searching, and be able to search on the metadata and so forth. This is also in development at JPL. It's not open source, but maybe down the road. So I was going to give a little preview of some of the client applications. It looks like I have some time to maybe check these out. I wasn't sure about the network speeds here, and I had a little bit of trouble, but let me see if... So this is Worldview. We use this as the GIBS reference client. And what OnEarth is really doing is serving the detailed imagery here. If I hit refresh, you can actually see the imagery, the tiles, being loaded. But like I mentioned before, I think there are some network speed issues, so I'm not going to do a full demo. There is another presentation coming up at 4 p.m., Exposing NASA's Earth Observations, I believe it's called, and there's going to be a more in-depth talk about Worldview. So you can actually see the tiles showing up here; that's basically what OnEarth is doing. There are a couple of other client applications. This one is using Google Earth. You can't really see it on screen here, just due to the screen resolution, but it's showing sea surface temperature. And there's also a 2D version. This one, I believe, is using Leaflet, and it shows the same thing, sea surface temperature. There are other satellite layers that you can look up and so forth.
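Under the hood, clients like Worldview are just issuing tile requests against the server. As a hedged illustration of the "common protocols" path, here is what a scripted WMTS GetTile request with the TIME extension described earlier might look like. The endpoint, layer name, and tile matrix set are placeholders rather than actual GIBS values; a real client would read those from the server's GetCapabilities document.

```python
# Minimal sketch of a WMTS KVP GetTile request, including the non-standard
# TIME parameter mentioned in the talk. Endpoint, LAYER, and TILEMATRIXSET
# are hypothetical placeholders; consult GetCapabilities for real values.
import requests

ENDPOINT = "https://example.org/onearth/wmts.cgi"  # hypothetical endpoint

params = {
    "SERVICE": "WMTS",
    "REQUEST": "GetTile",
    "VERSION": "1.0.0",
    "LAYER": "example_layer",           # placeholder layer identifier
    "STYLE": "default",
    "TILEMATRIXSET": "example_250m",    # placeholder tile matrix set
    "TILEMATRIX": "2",                  # zoom level
    "TILEROW": "1",
    "TILECOL": "3",
    "FORMAT": "image/jpeg",
    "TIME": "2014-09-01",               # the historical-imagery extension
}

resp = requests.get(ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
with open("tile.jpg", "wb") as f:
    f.write(resp.content)
```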
One thing I forgot to mention earlier: the OnEarth server only serves up raster imagery. That doesn't stop you from using a separate server to serve up vector layers; I believe this has some vector information somewhere. We also have this Lunar Mapping and Modeling Portal. It's taking OnEarth to the moon, I guess. So we have... this is pretty cool. Let's see. There's imagery going back to the Apollo era, archived imagery of the moon. It's just pretty cool. So those are just a few sample client applications. Here are some links; if you want to check them out in your own free time, see what they're all about. So here's a list of other known supported clients. I showed off Worldview, which uses OpenLayers. There's Leaflet, Google Maps, Bing Maps. And of course, this is also available for GIS clients; Esri's ArcGIS Online actually has the GIBS layers available through their portal, so you can use that for a variety of GIS applications. And of course, there are mobile clients, and script-level access, which is useful if you want to pull down a bunch of tiles and build a cloud-free map or something like that. I believe Mapbox did that for their own version of cloud-free maps. So, a little bit about our open source project on GitHub. All of this code is available online. There are a couple of other projects as well: there's Worldview and some samples of how you can build your own clients. The primary source for the OnEarth software is the NASA GIBS onearth repository. We're also working to separate out the MRF code so we can build that into its own standard. There are some specifications; it's not well documented at this point, but there are some specifications up there to provide you with a little bit more information. Most of the code is in C or C++. For some of the non-performance-centric software we use Python, just because it's easier to develop and use. It would be nice for the open source community to check out the software and provide some feedback, whether you like the performance or whether there's other software that maybe could be a benefit. And if you participate, there's also the sense of helping out the Earth science community, providing some aid to NASA, and pushing technology forward. A little preview of future developments: we're working on granule imagery. The challenge is how to serve granules on our server without sacrificing any performance. There are a couple of other items, such as multi-dimensional data, and several improvements to the MRF specification. We're getting some help from Esri; they're doing some pretty cool stuff with MRF, making it more cloud-friendly, I guess. And also... oops, I don't know what just happened there. That's basically the end of the slides anyway. I do want to advertise that there's an open position. I don't know too much about it. Yeah, pretty secretive. But that's basically it. I got the email, I'm good. So I'll leave off with a couple of links here. There are links to our GitHub page and to how to use GIBS; basically, there's an API. If you want to contact anyone at GIBS, the email address is out there, and my personal email address as well. So I guess at this point I can take questions. I assume that the serving is all done from those pre-generated tiles. Like, when you talk about the color map, that's not applied on the fly; that's applied during generation of the tiles? Oh, yeah, yeah, that's correct. All the tiles are pre-generated.
And during that pre-generation process we pass in the color map and that sticks with the imagery. So, for instance, if you just generate JPEGs, then that's the only thing the server can serve. Is there any interest in, like, doing dynamic conversion? I assume that kind of defeats some of the performance purpose. Yeah, there is interest, but the problem there is performance. Might be slightly unrelated: is there a feed of the data currency of the different products within the viewer? In other words, how do you best monitor the freshness of the different data that's available? I'm not sure if I followed that question. So, the tiles, the imagery, the recency: how do you best evaluate the recency of the different tiles in the imagery? So, for GIBS anyway, we serve up daily imagery, so it's basically just timestamped into that data. If you're looking at sub-daily imagery, there's not really a timestamp for how recent that imagery is. Are you guys just showing the EO bands, or are there any other derived products, like sea ice concentration, cloud cover analysis, et cetera? Is that going to go into GIBS, that sort of thing? Or is that a question for the other topic? Yeah, there is. So for GIBS, if I go to Worldview, there are several layers of science products on here. So in this one... oops, you guys can't see it, let me try to get there. So, yeah, there are different science parameters that you can get, and the list of available layers and compatible clients is quite long. You mentioned that the WMTS had a little extra thing for time. Is that above and beyond the WMTS standard? And then the follow-up question is, does the WMS support the time coordinate or the time parameter? So for WMTS it's not in the WMTS spec; it's something we added. And I'm not a WMS expert, but I think there is a time extension there, so that just follows along. So your server supports that part of WMS? Right, right, right. Did you put up the other slide that you were going to start with? Oh yeah, yeah. I was just wondering, with the MRF format, if imagery is updated four hours from live and the entire file must be appended for updates, how does that happen and how have you dealt with that? Does the entire file need to be regenerated when the data is updated? Yeah, pretty much at this point. About how long does it take to generate the MRF file? That totally depends on the size of the imagery. If it's kilometer-scale, it's pretty quick. If you're dealing with, say, global Landsat data, that could take hours. So there is some limitation there: if you're dealing with really high resolution imagery, it's going to take a long time unless you have some really beefy computing power to handle it. I think all the questions are done. Okay. Thank you very much. Thanks.
|
OnEarth is an open source software package that efficiently serves georeferenced raster imagery with virtually zero latency, independent of image size or spatial resolution. The key to OnEarth's speed lies in the use of a unique, multi-resolution file format (Meta Raster Format, or MRF) combined with supporting open source software packages such as the Geospatial Data Abstraction Library (GDAL) and Apache to serve out images via web service protocols such as Web Map Tile Service (WMTS) and Tiled Web Map Service (TWMS), or visualization formats such as Keyhole Markup Language (KML). The emphasis on performance and scalability were strong drivers for developing this specialized package versus using existing software.While OnEarth is currently deployed operationally at several institutions, powering applications across the Earth Science and planetary spectrum, its active development is managed by NASA's Global Imagery Browse Services (GIBS) project. The purpose of GIBS is to provide a complementary historical and near real time (NRT) image archive to NASA's Earth Science data products for a multitude of uses: GIS ingestion, first responder and NRT applications, data search and discovery, decision support, education and outreach.Released as open source to GitHub in October 2013, NASA is encouraging members of the open source community to participate in the evolution of OnEarth—in the roles of developers, evaluators, and users—as a means to vet and enhance its capabilities. This leveraging of efforts not only benefits those who intend to use the software for their own endeavors, it effectively contributes back to NASA by strengthening GIBS and promoting the use and understanding of NASA's vast archive of science imagery and data. Several tools, including the GIBS reference client, Worldview, will be demonstrated as part of this presentation to illustrate the breadth of application and consistent image access speed across installations.https://github.com/nasa-gibs/onearth
|
10.5446/31678 (DOI)
|
My name is Matt Prio and I'm talking apparently quickly now about AngularJS and using it in mapping frameworks. So there are several wrappers out there for different mapping frameworks and we'll talk about them. And there's the URL if you want to follow along for some reason. So first of all, what is AngularJS? It's a framework by Google and it's markup centric with a model view watcher framework rather than model view object or model view model. So it watches for the two-way binding and if you want to learn more about it, there's lots of resources out there. I'm not going to go into detail about Angular itself except for it to show. This quick example that they show on their like getting started page is a simple example which I didn't really think was all that simple. So here's the markup for it and here are all the little Angular template strings or special attributes and then here's the script that controls it and if you look at this slide and you figure it all out, you'll have like 90% of writing of using Angular in a web page yourself. But you got to figure out how all of these things kind of go with each other and stuff. But it's really powerful and it is expressive. But some people don't really like it. They get really cluttered things up and I'm kind of on the fence about that either way. So what you could have done instead of that was I could have written a directive like a to-dos directive and then declaratively it could have been really simple. It could have been to-dos directive and then a list items individually or if you had a model or some JSON to pull it from, you could have even populated it with a little template loop. So if you need to have required JS and AMD loading with your Angular, it gets really complicated and most people they just include the scripts in order and then have a build step or something else like that that takes care of making sure the things are required in order. But if you have to use require, there's actually a good example from Patrick Arlt from this March's Devs, every Dev Summit and he shows how to kind of encapsulate everything in require and then eventually require in your application and start up your page dynamically. So I mean looking at leaflet, Google Maps, open layers, two and three, D3 and the ArcGIS API. So these basically break out into two different styles. There's a declarative where most of the options are expressed there in the markup and then there's a script type which is it's pretty much just pointers to objects in the scope that have the information. This script type has become more popular lately and I think it's been more obvious when you look at it. So if you're trying to do something for people that you want them to just edit or include a little bit of markup, maybe the declarative thing is good. But for anything that's got even a little bit of complexity or any little bit of specificity or you want to be able to customize it, the script way is going to be a lot better. So here's the declarative and you can see so they've got like our controls and the different layers, what type they are, some options for them, all kinds of stuff there. And here's the script style which if you look in the markup is really simple. It's just got like markers and Oslo Center. And so then you look in here in the scope, that's Oslo Center and there are the markers. 
And so you could make that a lot more complicated and have functions and stuff like that which is probably where you want to be doing application logic anyways and JavaScript not in markup. So it's like Google Maps. Really, I found two ones that people seem to be using and that are documented well. And that's the Angular Google Maps which is from the Angular UI team. They originally had an earlier version that was more declarative but they've kind of, this other one is kind of taken over. And then there's this ngMap which basically retains the old declarative style of Google Maps UI. So here's the script style, the Angular Google Maps. And you can see like that leaflet example earlier, we've got a map object for a scope and then within it are properties and they're referenced up there. And what you'll see from this is that no matter how much logic people try and put into and wrap up for you in directives, you're still going to have to write a lot of stuff yourself for maps. There's just no way around it. So it might be better just to go ahead and do it where it makes sense rather than, oh, well this didn't support my one option that I wanted. Well, now I've got to make a pull request against that library. Is it still going to be maintained and stuff like that. So ngMap, that's more of a declarative one. You've got a center which is an array and then it's got the position. That's not a pointer but an actual string which then it'll send to Google to get geocoded and stuff. Okay, so look at leaflet. Leaflet really only has one good example that I found. If anybody knows of one other than this that they think is worthwhile and useful, let me know. But the stuff I saw was not like great. Anyway, it's also a script style. It was inspired by Angular Google Maps but it's got a much better set of examples and documentation and it's a little more fleshed out. So again, same kind of thing. In fact, this is the one I pointed out from the beginning. You've got the Oslo Center and the markers and they just make a map with a marker there and the map center to that point in time. And this message is a pop-up box. So open layers. Open layers 3 is new enough that it doesn't, there's just one thing and open layers 2 is old enough that there's an abandoned project that I started called azimuth.js which was supposed to be like a nice declarative wrapper for open layers leaflet and Google Maps. And then there's another one which we'll talk about here and they don't have their demo. It's horrible. But they do have a lot of the same, they give you a lot of ability to do stuff in the markup but you have to like go and look through their code to figure out what that is. All right. So that's open layers 3. Is this Angular open layers directive? In fact, this person was pushing updates to this like 10 minutes ago and it's not even published on Bower yet but it is going to be published later this week. And it's, you get that script style. It's the same person that's doing Angular leaflet directive and it looks basically the same when you're coding it and they've got a good set of examples on their GitHub repo and you've got Angular open layers which is, like I said, the declarative style and you can go look but there's not a lot of documentation there. And also you could try and use or fork my azimuth.js library but like I said, I'm not really maintaining it and haven't looked at it in a while. D3, I couldn't find anything that actually wrapped up D3's mapping capabilities. 
There are a ton of D3 charts out there, but I think the thing with maps is that there are just so many options and so much stuff you have to write yourself anyway that it just hasn't been worth it. And then with the ArcGIS API, we don't have anything official yet. Patrick and I have worked on some things, and we keep talking about getting together and really making a full-featured Angular wrapper for it, but if you want to look at it, those are the URLs to see the stuff that we did at the Dev Summit, and that's the state of that. It's, you know, some examples: you can wrap up a simple map and a legend and some layers in it, but again, we're not supporting it yet and we haven't made it official yet. All right. And that's it for me. Any questions? Let's see. I think I really like the script style best. That question is also tightly related to which framework you like best, but I think so with now three well-supported examples using the script style, and having looked at what it took to support just two different feature requests that some people had for azimuth, and what a pain that was because I hadn't thought about it before and I had to redo the way things parsed and worked. With a script style, where you're just pointing to the scope and the scope has all the functions, you can add stuff to be watched very easily, and it makes the two-way binding make a lot more sense, rather than trying to put all that two-way binding in the markup and maybe or maybe not supporting the feature that you want. Yes. Yeah, I think it's about fitting into an existing framework. If you just want a map on a page, I agree: what's the point of using Angular if all you want is a map on the page? Then just do it with normal JavaScript. If you want something like an embeddable link that's really easy for people to copy and change, then you want something that's more like the declarative style, where they just put in the center and maybe a little bit of information and, bam, they've got a map; that makes sense too. But this more detailed and customizable script style, I think that's really for when you have an overall application and the map is a part of that application and it needs to fit in there well and play with the other Angular parts nicely. So that makes a lot of sense there. But I agree: if you just want a map on a page, it's, well, not necessarily a lot of work, but it's a lot more than you need versus just writing it in JavaScript. Yes? Yeah, no, that actually does work. I had a really bad experience with trying to use the Internet during a demo yesterday, but we can chance it again if you all want. Actually, I'll show you from azimuth; I've got a simple example. No, I'm not connected here. If you go to the URL for my talk and follow any of those links, there are a couple of good examples that have the two-way binding, where you can move the map around and it states what's happening, or you can click buttons and it changes the layers. And you can zoom in and zoom out. There are examples for both the Angular Leaflet directive and the Angular OpenLayers directive that have scale-dependent layers, set up so when you zoom in and zoom out, it swaps those layers in and out. Yes? Well, again, with the declarative style, it's kind of up to you to know what's supported in that declaration and which attributes are going to do what.
So, like ngMap: he purposely devised that so the only API you have to know is the Google Maps API. You pretty much just write Google Maps configuration or information there in the directive, and it parses it and uses it. And then with the script style stuff, because you have real control over it and you're directly using the library, you're also not really limited there. So, yes, it depends on your use case. Yes? Maybe I missed it somewhere. The project that I'm working on, I'm using Angular for all my controls, but I haven't been able to get polygons on the map with any of these. Will the wrappers allow me to do that? I'm just using jQuery right now to control those polygons on the map. Yeah, I thought I had one, the multiple-polygons example, that was really interesting and was kind of that same case. It uses an Angular request to get either a JSON or GeoJSON data source. But I apparently don't have it here; I took it out because it was too long to fit on the slides, is what happened. But anyway, yeah, it's just a controller, and you can require that in like you do other controllers, and you can use it, and it can go and fetch data or read it from other places just like anything else, and then it just puts it up on the map.
|
AngularJS is rapidly gaining popularity and favor in the front-end web development community. Several open-source AngularJS wrappers exist for open and closed source web mapping libraries. This session will survey the landscape of existing mapping library wrappers. Wrappers for OpenLayers, Leaflet, d3, Google Maps, and Esri WebMaps will be examined. Comparisons of the different abilities of these wrappers and the techniques required when using them will be examined. Advantages, strengths, weaknesses, limitations, and "gotchas" will all be examined for the AngularJS interfaces of the different mapping libraries. Attendees should leave the session with an understanding how to best integrate their mapping library of choice within an AngularJS application and how they could help improve these various wrappers.
|
10.5446/31679 (DOI)
|
My name is Christian Willmes. I'm a PhD student at the University of Cologne. I did some work on Köppen classifications of paleoclimate model simulations, on which I submitted a paper for the academic track here, and I'm very happy to be able to present my talk to you now. I structured this talk in five parts. I will first introduce a bit of the project environment in which we conducted this research. Then I will explain what Köppen classifications actually are and how they're used. Then I will talk a bit about the paleoclimate models, which I used as the basis for the classifications. The main part of the talk will be the implementation of how we derived the classifications from the model data. And finally, I will show some maps with the results. Okay. I'm working in a large research project called the Collaborative Research Center 806, which is funded by the German Research Foundation. It has around 100 scientists working in this research center. It's an interdisciplinary research project concerning culture-environment interaction and human mobility in the late Quaternary. There are scientists from the universities of Aachen, Bonn, and Cologne working in this project, and myself and the group I'm working in are based in Cologne. You can find lots of information at the URL you see here; it has a .de ending, but the content is in English, so you can understand it all. So the main point here is that I'm working in an archaeological, geoscientific, paleo-environmental context, for which paleo-environmental classifications are, of course, a useful data basis for further research. That's why we conducted this work. So I want to give a short introduction to the Köppen classification. The map you see here is such a classification, created by Kottek et al. in 2006 from available climate station data, which is meanwhile available worldwide in good density. This data is on the basis of 50-year monthly means for precipitation and temperature, which are the main variables for the Köppen classifications. And this is what these maps look like with today's data; this one is based on data from 1951 to 2000, as written here. So if you are familiar with vegetation distributions and ecology, the biome maps, which are also very popular, look very similar. This classification seems to show the same pattern as actual biome maps, or ecosystem maps you could call them, for different ecosystems. The Köppen classification has five main classes, which are the tropics, which you see mostly in red; the temperate climates, which are greenish; the arid climates, which are brownish on this map; the cold, snowy climates, which are purplish; and the frost or polar climates, which are blue. Those are the main types, and then there are several subsequent distinctions based on temperature, precipitation, and seasonality, mostly of precipitation. To compute these classifications, the algorithms are more or less based on the use of these 11 variables you can see defined here. These are all based on temperature and precipitation, which you can see here; I will explain them further later. So the thing is, you have the temperature and precipitation data, and you compute these variables with GRASS map algebra on the rasters. And then you use them in if-expressions to distinguish the classes.
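To give a flavor of how such variables and if-expressions translate into GRASS map algebra from Python, here is a minimal sketch. It is not the author's actual script, which is linked at the end of the talk; it assumes 12 monthly mean temperature rasters (tmean_01 to tmean_12, in degrees Celsius) and precipitation rasters (prec_01 to prec_12, in mm) already exist in the GRASS location, and the single class test shown is purely illustrative.

```python
# Minimal illustration of deriving Koeppen-type variables with GRASS map
# algebra from Python. Raster names (tmean_01..tmean_12, prec_01..prec_12)
# are assumptions for this sketch, not the names used in the actual scripts.
import grass.script as gs

months = ["%02d" % m for m in range(1, 13)]

# Mean annual temperature: average of the 12 monthly means.
t_expr = "(" + " + ".join("tmean_%s" % m for m in months) + ") / 12.0"
gs.mapcalc("t_ann = %s" % t_expr, overwrite=True)

# Mean annual precipitation: sum of the monthly precipitation rasters (mm).
p_expr = " + ".join("prec_%s" % m for m in months)
gs.mapcalc("p_ann = %s" % p_expr, overwrite=True)

# Temperature of the coldest and warmest month.
gs.mapcalc("t_min = min(%s)" % ",".join("tmean_%s" % m for m in months),
           overwrite=True)
gs.mapcalc("t_max = max(%s)" % ",".join("tmean_%s" % m for m in months),
           overwrite=True)

# One illustrative class test: equatorial climates (Koeppen 'A') require the
# coldest month to stay at or above +18 degrees Celsius.
gs.mapcalc("class_A = if(t_min >= 18.0, 1, 0)", overwrite=True)
```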
Here you see the criteria built from the variables defined before, from which you then define the several subclasses. There are lots of them, so I listed them here for completeness. You can put this into code, and that is actually what I did, but we applied it to paleoclimate data to derive these maps. The paleoclimate data we acquired from the DKRZ node of the ESGF, the Earth System Grid Federation. DKRZ is the Deutsches Klimarechenzentrum node. There are several nodes; you can see a list on their web page, with some nodes in the US, of course, and around the world. This grid also holds the data for the climate models which are, for example, used for the IPCC reports. We actually used the same models which produce the data for the IPCC reports, but run for paleo times. These models have certain boundary conditions, which for the forecasting projects are things like greenhouse gases and ocean salinity, and further things like the orbital parameters of the Earth. They can also run these models with boundary conditions for the past: for example, if you get these parameters from sediment archives or ice cores, et cetera, you can run them for past times. And we acquired these three model runs. The control run is an actual run with boundary conditions from pre-industrial time; this is assumed to be about 1850 or 1800, so before the main onset of industrialization and its feedback to the Earth system. Then we have the mid-Holocene time slice, which is 6,000 years before present. And the LGM, which is the Last Glacial Maximum. You see some parameters here; they're almost all the same. I took the atmosphere monthly realm data, because precipitation and temperature are in the atmosphere realm. And you see the temporal resolution: the control run is a 150-year simulation, and both others are 100-year simulations, with monthly values. So you have these years times 12 months, which gives you the number of actual raster layers per model. Then you have the spatial resolution, which is not very fine: it's 192 by 96 cells for the complete Earth surface. And there's a version number. I have links to the actual data on the last page of the slides, which I will upload, so you can access the same data I'm talking about here. And, oh yes, here are the model parameters I talked about before. These are the boundary parameters with which they run the models, so I listed them here again. As I already said, they can adjust the models to these past times. Yeah, and this is the processing tool chain for deriving the classifications. So we have here the input NetCDF files. You get NetCDF files for each variable, which have these 1,200 or 1,800 layers of data, one for each month over 100 or 150 years. And we have a Python script computing the monthly means for each variable, which then results in 12 monthly mean raster layers for the temperature (tas) and the precipitation. Then I applied an interpolation, which I talk about on the next slide, to increase the information from the data; this is a crucial point of this approach. And then we did the actual calculation of the classifications from the criteria I presented before, formulated in another Python script. And from these classifications, we made the maps. So here, on the left side you see the original resolution of the input data, and on the right side you see the increased resolution. We resampled it to a tenth-degree grid, so 0.1 by 0.1 degrees resolution.
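Before looking at the interpolation step, here is a rough sketch of that monthly-mean computation. It assumes the NetCDF time steps have been imported into GRASS as one raster per band (for example with r.in.gdal, which numbers multiband imports tas.1, tas.2, and so on), so that for each calendar month you average every twelfth band. The raster naming and the 100-year length are assumptions for illustration, not the conventions of the published scripts.

```python
# Sketch of computing 12 climatological monthly means from a monthly model
# time series imported into GRASS as tas.1 .. tas.1200 (100 years x 12
# months). Raster naming and series length are assumptions for this sketch.
import grass.script as gs

N_YEARS = 100

for month in range(1, 13):
    # Every 12th band belongs to the same calendar month.
    bands = ["tas.%d" % (year * 12 + month) for year in range(N_YEARS)]
    gs.run_command(
        "r.series",
        input=",".join(bands),
        output="tmean_%02d" % month,
        method="average",
        overwrite=True,
    )
```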
And we applied a bilinear interpolation. We think this is legitimate because the input data are model outputs on a continuous grid. If you think about the gradient between one grid cell and the next, it is a continuous slope; it's not like a curve, it's a continuous slope. That is why we believe this bilinear interpolation is valid for increasing the information, in the sense that the boundaries of the classes resulting from this classification become finer. If you have large gradients between two grid cells, the boundary will be different from the original grid cells, and this results, in some way, in a higher amount of information in spatial resolution. Here are some examples of what the Python script looks like. You see an example of the calculation of the mean annual precipitation, and below it a calculation of the summer/winter differentiation for the seasonality. You see it's a mapcalc expression from GRASS, which you can run using the GRASS bindings in this bit of Python code. And you see here the precipitation data is given in millimeters per second per square meter, while the classification definitions are all given in millimeter expressions. So what is given as a flux has to be converted to daily measures, to millimeters per day, not per second. And the other thing is to distinguish between summer and winter, by just adding up the mean temperatures and looking which half has the higher value. This also distinguishes between northern hemisphere and southern hemisphere, mostly because some of the expressions shown before are based on the seasonality. So what you see here is a map of the LGM classification. Worldwide, you don't see the high resolution which it has, because it's a worldwide view here. But what you can see is that for the LGM we applied different coastlines, based on the minus 120 meter bathymetry contour. According to several publications, it is more or less accepted in the community that during the Last Glacial Maximum, sea levels were about 120 meters below today's. And we additionally included a dataset from Ilast et al., which I have cited in the submitted paper, for the ice sheet distributions. So you have here the northern European and North American ice sheets, and the ice sheet on Antarctica, for example, as landmass. And you can see, for example, very well here in Southeast Asia, the larger land masses. This has, of course, also effects in northern Europe; for example, Great Britain was part of the European mainland 21,000 years ago. So this is very useful information if you do research on culture and environment during the Last Glacial Maximum, to see what the environmental patterns were like in these times. And we have here the map for the mid-Holocene, which has the same coastlines as today. It's partly different from the control run. If you look at the control run, there are some differences in the snowy climates, and in sub-Saharan Africa you see some differences in the distributions. It is assumed that around 8,000 years ago, today's Sahara was savanna and not yet desert; it was habitable for humans and animals. So yeah, that was something. I have all the code and the data and everything online, so you can check that out. I've written a tutorial which you can find under this URL, and you can access the Python files directly.
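The two expressions described above, the precipitation unit conversion and the summer/winter test, plus the resampling to the 0.1 degree grid, can be sketched like this. g.region and r.resamp.interp are standard GRASS modules; the raster names are again illustrative assumptions rather than the ones used in the published code.

```python
# Sketch of the resampling and the two mapcalc expressions discussed above.
# Raster names are illustrative; g.region and r.resamp.interp are standard
# GRASS modules.
import grass.script as gs

# Align the computational region to a 0.1 x 0.1 degree grid.
gs.run_command("g.region", res="0.1", flags="a")

# Resample every monthly mean raster to that grid with bilinear interpolation.
for m in range(1, 13):
    for var in ("tmean", "prec"):
        gs.run_command("r.resamp.interp",
                       input="%s_%02d" % (var, m),
                       output="%s_%02d_01deg" % (var, m),
                       method="bilinear", overwrite=True)

# Precipitation flux (mm per second) to mm per day: multiply by seconds/day.
gs.mapcalc("prec_01_mmday = prec_01_01deg * 86400.0", overwrite=True)

# Summer/winter (hemisphere) test: compare the April-September half year with
# the October-March half year using sums of the monthly mean temperatures.
summer = " + ".join("tmean_%02d_01deg" % m for m in range(4, 10))
winter = " + ".join("tmean_%02d_01deg" % m for m in (10, 11, 12, 1, 2, 3))
gs.mapcalc("summer_is_aprsep = if((%s) > (%s), 1, 0)" % (summer, winter),
           overwrite=True)
```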
And the data is also published in our project database with DOIs, so the shapefiles and the resulting GeoTIFFs are there. And yeah, I'm happy about comments, especially maybe on the interpolation approach, which I think is valid. I didn't try anything else yet, but maybe there are some people who know more or have something better which I didn't consider yet, so I would be happy about feedback. And then, sorry, I have to do a little bit of advertising: in November we have a data management workshop organized by our group in Cologne, and we are happy to welcome you there if you want to visit beautiful Cologne. There's some material on state-of-the-art data management, and you are all cordially welcomed and invited. Yeah, and I want to thank you for your attention. Sorry for the slightly confusing order of the slides; I recognized that myself. Thank you very much. I had a question; you asked for comments about the interpolation. Yes. But I actually think I don't fully understand, if you go back one slide, the moment when you do the interpolation. Yeah. Because it seems to me that you interpolate part of the data and then go further with the analysis using the interpolated data. Am I correct? Yeah. That's just because it wouldn't make sense to interpolate the resulting classification, because it's not continuous data; it's categorical class data. So we interpolated the monthly temperature and the monthly precipitation, the 24 data layers, and then proceeded with the calculations on the interpolated datasets. And you do the interpolation, for example on the temperature, purely based on the raster cells and the values, a linear interpolation of the values, but you don't take into account any other data, I understand. But the Köppen classifications are just based on precipitation and temperature. That was the only climate data Köppen had available some 100 years ago. Yeah, yeah. But Köppen didn't interpolate raster data. He didn't have raster data. Yeah. And what you do if you interpolate, for example, between two of these big blocks of temperature, is assume that there is a smooth difference. Because they are already monthly mean values. Yeah. So I assume that there is a continuous slope, like a straight line. But that might not be true; you have 20 degrees mean temperature here and then here is 20 degrees. But especially if you do that for temperature and especially for precipitation, you know that in real life the fact that there's, for example, a mountain range somewhere, or a coast, would make these differences much less smooth. Right. But we assume, or we more or less know from the descriptions of the models, that this is already taken into account in the model data. It's already taken into account. Ah, okay, the topography and all the boundary conditions. And because these are monthly mean values, things like weather fronts and effects like that, large gradients at small spatial scales, are smoothed out through the monthly means. Okay. So it's possible to do this. Okay, I understand. Here we are. Okay. Thank you. Thank you.
|
A pyGRASS implementation of the Köppen-Geiger climate classification applied to paleoclimate model simulations from the Paleoclimate Model Intercomparison Project III (PMIP III) will be presented. The talk will show the details of how Köppen-Geiger classifications are practically implemented and applied to climate model simulations using GRASS GIS, the python library pyGRASS and QGIS for the cartography.
|
10.5446/31683 (DOI)
|
Okay, hi. I'm Eric Theise. Thank you all for coming. This is a lot of people. The work I'm talking about is work that's in progress. I had both a four-hour workshop at the beginning of the week as well as this thing today, and given that people paid extra for the workshop, I put a lot of time into getting ready for the workshop. Okay, does this come up or is this... Do I just need to bend over? Okay, I'll bend over. I can lean. I can slouch. Yeah, I was just basically saying a lot of the effort leading up to this conference went into the workshop and not so much into this. So these are kind of incomplete results, but I think you'll find them interesting. I'm also curious if any of you have already done some of this kind of work. So let's talk about... You still can't hear me? Yeah, I can be... Yeah. Right. Let's rock on. Okay, so why did I start... I'm not going to tell you what the title means until the end. I will tell you that last year, and even continuing, I have a background in experimental film, and I have this ongoing interest in how some of the perceptual tropes that were used by experimental filmmakers in the 70s and 80s can be applied to cartography, with different results. So I gave a couple of talks about that last year, and that kind of motivated what I'm going to talk about today, which is a little bit less interesting, but probably a lot more practical, so that's going to be useful. There was a film made in 1970 by a filmmaker named Ernie Gehr. He was based in New York, eventually lived in San Francisco for a while, and now continues to work out of Brooklyn. He's in his 70s now, but he's in sort of a really productive time of his life. And he made this film called Serene Velocity. And Serene Velocity, basically, this was the start of Serene Velocity: it is an institutional hallway at, I think, SUNY Binghamton, maybe. And what the film does over the course of 20 minutes is it zooms in a little bit, and it zooms out a little bit. And it zooms in a little bit, and it zooms out a little bit. And these are fixed zooms. You don't actually see the zoom happen. You just see: you're here, and you're here. Four frames here, four frames there, four frames here, four frames there. A little boring, and then it goes a little bit further. And then it goes a little bit further. It's an experimental film, dude! What do you want? You know? This is what people did in the 60s and 70s. And what's interesting about it, I mean, it's the kind of film where it comes on and people get restless, and like a third of the audience leaves, and then everybody else has a really sublime time. What's interesting about it is that because of the shallow zooming at the very beginning, you get a sense that you're traveling in this space back and forth. But as the zooms get longer, the space sort of flattens out, and the perspective just recedes into kind of an X, and you end up just animating these various parts of the frame. So there's like an ashtray in the hallway, and there's an X. You know, all these things kind of start to bounce around the screen in a way that is not what you associate with depth. This is just a couple of frames. Wow, it's really dark. It kind of shows, I mean, by the end of the film, you're bouncing from one end to the other.
And in an interview at the time, Gehr was talking about a lot of the same things I just said, you know, that it's this sort of animation and this tension between what's obviously a three-dimensional space and a 2D representation, a 2D experience, of that space. So I was interested in making a map that was like Serene Velocity, and I will show you that in a second. I'll also show you this, which is from a different film of the same era called The Flicker, which basically says you might have an epileptic seizure looking at this film. And that film is just black and white frames, and it's really kind of painful to watch, although it also has its glory moments. But I never really thought of Serene Velocity that way, and when I showed an excerpt of it at NACIS, somebody in the audience came up to me and said, oh, that gave me a headache, man, you shouldn't have done that, you should never show that again. So if you have motion sickness or something, don't look at what I'm going to do next. So this is OpenStreetMap data based in Washington, D.C. This is not a 20-minute map, this is a much abbreviated version. But anyway, it kind of uses the same approach, so it's just using Leaflet in a very standard way with tiles. And eventually the zooms start getting more severe, and you get the idea. It was the first one of these things I did. I've done a series of either tributes to films or pieces using approaches by these experimental filmmakers to do something cartographic. And, as is often the case doing this, I was in a hurry, and I spent a fair amount of time styling this, and then I wanted to generate my tiles, so I just set up TileMill and ran it. Because the zooms here go from zoom level 12 to 21, the rendering was basically an overnight affair. I would turn it on before I went to bed because it would run for like three or four, maybe six hours, I don't really know what it was. So I would check in the morning, and hopefully it would have worked. Usually it did work; TileMill is a great product, it does work. And I would have my tiles. But in the course of doing this (it's almost over; it is over), in the course of doing this, I realized what a waste it was for my project, and also probably for other projects too. I mean, if you just blindly go ahead and render tiles this way, what you'll find as you get into these zoom levels is that you'll go from having, you know, 20 tiles to 48 (I mean, this is our pyramid structure), and then we get to the last thing, you know, the thing will automatically render like three million tiles, I'm sorry, 2,500... 2.5 million tiles, at the highest level, which you don't need for a project like this. And in fact, yeah, the idea is you're wasting time, you're wasting disk space, you're wasting machine cycles, and you really don't need all that sort of stuff. You could do it by hand, but what I was interested in finding out is how to do this as an option within TileMill: to just render the tiles you need to represent something within a viewport. So if you look at your developer tools, you'll see that here I'm using 15 tiles to fill the viewport. And I could imagine maybe I need 20. So the whole thing I really need for this project would just be about 200 tiles, maybe 27 megs worth of disk space. And as I thought about this more, it's like, well, this actually makes more sense for projects besides just mine.
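The arithmetic behind that waste is easy to sketch: a whole-world pyramid quadruples per zoom level, while a fixed pixel viewport needs roughly the same couple of dozen tiles at every zoom. The viewport and tile sizes below are illustrative assumptions, not figures taken from the talk's actual project.

```python
# Back-of-the-envelope comparison: tiles in a whole-world pyramid per zoom
# level versus tiles needed to cover one fixed-size viewport at that zoom.
# Viewport size and tile size here are illustrative assumptions.
import math

TILE_PX = 256
VIEWPORT_W, VIEWPORT_H = 1280, 800     # hypothetical browser viewport

def full_pyramid_tiles(zoom):
    """Tiles in the whole world at a given zoom level (2^z by 2^z)."""
    return (2 ** zoom) ** 2

def viewport_tiles(width_px, height_px, tile_px=TILE_PX):
    """Tiles needed to cover a fixed viewport, with one extra row and column
    of slack for partial tiles at the edges."""
    cols = math.ceil(width_px / tile_px) + 1
    rows = math.ceil(height_px / tile_px) + 1
    return cols * rows

total_full = total_view = 0
for z in range(12, 22):                # zoom levels 12 through 21, as in the talk
    total_full += full_pyramid_tiles(z)
    total_view += viewport_tiles(VIEWPORT_W, VIEWPORT_H)

print("whole-world pyramid, z12-21:", total_full)  # astronomically many tiles
print("viewport only, z12-21:", total_view)        # a few hundred tiles
```

Run over zooms 12 through 21, the fixed viewport comes out at a few hundred tiles, while the whole-world pyramid is in the trillions; a real project's bounded render sits somewhere in between, which is the gap the rest of the talk is about.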
I mean, in a lot of cases, you look at it from two different ways. If you're Mapbox or if you're Google, and you need to provide generic tiles for the whole world at all levels, of course you don't care about these issues, because you need to be able to deliver any zoom level at any particular place on the globe. But for someone who's just saying, my party is going to be here at my house, or this is where my office is, or this is how you get to our store, or the concert's going to be at this venue, you don't really need to render your entire city or the entire state or the entire continent or whatever. You really need something that at a high level shows context, and then as you zoom in, it focuses on the place you want. So it seems like a very standard use case. And if you're styling your own tiles, delivering your own tiles, and as we've moved into paying by map views, you really don't want people to drift off from the story you're trying to tell in your map, right? You don't want somebody to look to see where the party is and then wonder, where's my friend's house in relationship to that, and how would I get to the grocery store? Because, you know, in a pricing-by-map-view model, you're paying to store those tiles that don't really tell your story, and you're paying to deliver those tiles that don't tell your story, and somebody might even hijack your tiles and use them for something else. So does this scenario make sense to people? Has anybody else thought about this, and has anybody else done anything about this? This is the question. That's such good news. Okay, because sometimes, you know, you search, and you're like, well, maybe I'm using the wrong words, or maybe I'm not talking to the right people, and it's like everybody knows how to do this except me. Okay, so pyramids. Yeah, it comes down to pyramids, tile pyramids, and this is what we usually think of when we think of tile pyramids, which is at least the representation you usually see, and even this one is a little bit fudged. But, you know, you start off with the whole world on one tile, and then you go to four tiles to represent the whole world, and then 16; you keep quadrupling as you go. I mean, this one, I'm not sure which level this one is, but I guess it is the next one. And even here, whoever's done this representation, these folks have shrunk the tiles somewhat. But it's kind of interesting to hold the tile size constant and go down and just see how enormous that gets after a few more levels. And this is not really the situation; this is the situation I'm trying to avoid, right? In some sense, I'm doing the opposite of this. I'm starting with something high, and I'm actually trying to focus down to a very specific place by the time I get to a higher zoom level. So drawings that I did not get done for today, but that I'm working on just to help me think about this, are pyramids where you would hold the geographic area constant. So again, think of the place on the ground that you're trying to represent, and then show the tiles going up from that. It's no longer a pyramid; it's tiles that shrink to become almost too small to see. And then the thing that's really my issue, which is to hold the viewport constant, which is where the geographic area covered expands as you zoom out. So that's what I'm heading for. Yeah. Okay. So I'm going to switch subjects a little bit.
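That viewport-constant shape can be written down as plain tile math: given a center, a pixel viewport, and a range of zooms, compute for each zoom which XYZ tiles fall inside the viewport-sized box around the center. The sketch below is a standalone illustration of that geometry using the standard slippy-map tile formulas; it is not the TileMill patch discussed next, and the example center and viewport size are assumptions.

```python
# Standalone sketch of the "Tower of Prince Henry" geometry: for each zoom
# level, the XYZ tile range that covers a viewport-sized box around a center
# point. Not the TileMill patch itself, just the tile math it would rely on.
import math

TILE_PX = 256

def lonlat_to_tile(lon, lat, zoom):
    """Standard slippy-map (Web Mercator) fractional tile indices."""
    n = 2 ** zoom
    x = (lon + 180.0) / 360.0 * n
    y = (1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n
    return x, y

def tower_tile_ranges(center_lon, center_lat, zooms, view_w_px, view_h_px):
    """For each zoom, the inclusive (xmin, xmax, ymin, ymax) tile range that
    fills a view_w_px by view_h_px viewport centred on the given point."""
    half_w = view_w_px / 2.0 / TILE_PX   # half viewport width, in tiles
    half_h = view_h_px / 2.0 / TILE_PX
    ranges = {}
    for z in zooms:
        cx, cy = lonlat_to_tile(center_lon, center_lat, z)
        n = 2 ** z
        xmin = max(0, int(math.floor(cx - half_w)))
        xmax = min(n - 1, int(math.floor(cx + half_w)))
        ymin = max(0, int(math.floor(cy - half_h)))
        ymax = min(n - 1, int(math.floor(cy + half_h)))
        ranges[z] = (xmin, xmax, ymin, ymax)
    return ranges

# Example: a Washington, D.C.-ish center, zooms 12-21, a 1280x800 viewport.
for z, (xmin, xmax, ymin, ymax) in tower_tile_ranges(
        -77.03, 38.90, range(12, 22), 1280, 800).items():
    print(z, (xmax - xmin + 1) * (ymax - ymin + 1), "tiles")
```

The same per-zoom boxes, converted back to geographic bounds, are what a front-end piece could use to constrain panning at each zoom level.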
I was in a thrift store. I needed to buy a wedding present. So, it was not, actually, yeah, I think I needed to buy a present for something; it wasn't a wedding present. Actually, I would have done better than this if it were a wedding present. But in a thrift store, I found this book and I thought, well, this is unusual. I like encyclopedic things; you know, a book called Salt is fascinating to me. It's just one subject, and let's just learn everything there is to know about it. And lo and behold, in this book, I found this obscure marble game called Prince Henry, where you actually build this tower that's kind of cut off at one end. And whatever kids played this in the 70s, they tried to bounce a marble in there. But this is actually what I'm trying to build: it's one of these things, except upside down. So instead of fighting the baggage that comes with the term tile pyramid, which doesn't do exactly what I want, I decided I would reference this marble game as the name of the approach I was going to take. So that's the Tower of Prince Henry. Now you all know. And the patches I'm working on are twofold. For this sort of thing, you need two pieces. You need something on the back end to generate the tiles, and to not over-generate the tiles. So the first patch, which is not done but will be released when it is done, is simply going to put a little box in here. There. Yeah, I'm not used to looking at it so large. It basically says that you want this Tower of Prince Henry kind of thing. So for a specific zoom level and center and bounds, it will scale up to what you need as you zoom out, and come to a point, or to a square or a rectangle, at the bottom as you zoom in, and not generate anything outside of that. What's interesting is that the MBTiles format does handle that sort of thing: you don't have to have the same number of tiles at each zoom level for it to work. Of course, you could also just store tiles as loose image files. And then the other patch you need is on the front end, for the consumer of this stuff. The way to do this with Leaflet, at least, would be to work with setMaxBounds. So for each zoom level, you would have setMaxBounds based on the parameters you used for your tower, and the mapping library would not allow you to stray beyond those bounds for any particular zoom level. I have not looked at OpenLayers for a while, and I'm excited about OpenLayers 3 actually; it's one of the things I'm taking home and going to look at quite hard once I get back to San Francisco. And I'm sure there are similar commands in OpenLayers that allow you to do the same thing: restrict yourself to a geographic region per zoom level, which would keep you on track with these tiles. And then the last thing I'm going to say, it's kind of an accelerated talk, is that it would be interesting, and I don't know how to do this, this seems harder, to actually have like a surface, a zoom level surface. I don't know what else to call that. Here's another word; what should I call this? Because the thing I showed you really deals with one destination, you know, a neighborhood or a retail location, whatever, the thing you want people to see. But I've worked on projects where people want to show, like, here are my stores across the nation. And they have no stores anywhere from the Rocky Mountains to the Mississippi River. So you wouldn't really want people to be able to zoom in very far there, although you would like to show some context.
We'd like to force them to kind of come up to a continental level and then drop down again as they get towards Quad Cities or Chicago or whatever those stores would happen to be. So instead of having a simple set of max bounds, which you couldn't go beyond, if you traced over the surface of a geographic region, it would take you up or down. And also, you know, I mean, this is really about conserving tiles and conserving space. Peter was in my workshop — I think I saw Peter here. Yeah, I was talking about some of the work he does in the state of Washington where, I mean, it's true of any state. Like, you know, for a city, you need to be able to zoom in and see things that are there. But if you're out in a very unpopulated area with very few roads, unless your needs are agricultural or parcel-based, often a general public consumer doesn't need that zoom level. So you don't need to provide those tiles. They don't need to even go looking for them. And you can conserve all sorts of resources by doing that. So that's where this project is going. And I will have some drinks at the Gala tonight and go home on Sunday and get to work on this. And you can keep an eye on my GitHub stuff for these two little libraries. It'll work. One will be a forked version of TileMill. You know, there will be a plug-in for Leaflet and a plug-in for OpenLayers as well. So if there's any questions or anything, I'm happy to. That's my formal ending there. Thank you for coming. Oh, it was interesting. Yeah. Oh, so you get this now, yes. Is there a reason why you haven't moved towards using TileStache or something else that will generate only the tiles that are actually needed and then cache those for future use? That would be one, that's the way that in our work we've gotten around having to generate massive sets of tiles for some very rural areas. But there's still the possibility somebody might go to that area if need be. Yeah, if somebody does navigate to it, it would generate the tiles. But unless somebody does, then they're not going to come into play. Right. Yeah, I just think this is more of a brute force. Like, I'm just going to fence you out of that stuff. I don't know, that's not my usual disposition, but that's, you know, it just seemed like, because I know I've done that. You get distracted or the maps, especially if it's a pretty map. If you've done a really nice style, somebody's going to go meandering around to see what other things look like. I don't know, it just seems to me if you're trying to get people to a particular place, you should have the option to say, like, that's all I'm going to show you. Here's where I am in the city, boom, boom, boom, this neighborhood, this is the block I'm on. You can do a region, but, you know, it still kind of focuses you there and doesn't let you meander and preserves, you know, conserves resources. So, yes, that approach is valid, but it's not, I've kind of put on a different hat for this one. I was just wondering also kind of on the same lines, the fencing out, have you tried that fencing out or boxing the user in on the client side? So, limiting, I don't know what you're using for your client side, like, take Leaflet, for example, and you could say, okay, when the user zooms in, they're only allowed to be within a certain bounding level. Well, that's the second part of this. That is just a matter of doing that per zoom level based on the information you specify. So, that is part of it, but I'm also interested in not even generating the tiles. 
And I don't really want to have an exp... You know, I don't... Why do I... You know, especially because what I was doing, like, the zoom levels are crazy, and I did get a lot of tiles. And I really just would rather generate, you know, a few hundred tiles instead of millions and billions. So, I'm interested in solving the back end issue and the front end issue as well. But yeah, you're right. I mean, we're sort of at the same place on the front end. Anybody else? Okay, great. Well, thanks so much for coming. It's great to see you.
|
Programs that generate map tiles default to generating tiles for a bounding box whose dimensions are fixed up and down the zoom stack. But the overarchingly common use case calls this default behavior into question. If the ultimate goal of a map is to lock down the display of a feature at a high zoom level, then any tile outside of the inverted pyramid whose truncated top bounds the feature at the desired zoom level is extraneous, unnecessary. Inspired by a game of marbles that uses a similar shape in its playing, I call this truncated, inverted pyramid the "Tower of Prince Henry, Reversed"[1], and abbreviate it TOPHR. This presentation describes modifications to TileMill, the same strategy implemented directly through Mapnik XML, the use of the flexible MBTiles format to store the generated tiles, and presents several measures of the resulting savings (tile generation time, number of tiles, disk space). I'll also describe a plug-in for Leaflet and an approach for OpenLayers that ensures that map users cannot stray outside the bounds of TOPHR. 1. It's also reminiscent of the name of a real album by The Fall or an unreal tarot card.
|
10.5446/31686 (DOI)
|
Yes, my name is Jacob Lenzthorpe. I work at the Danish Geological Survey and Exploration Group. It's called GGE. It's not the official Danish Geological Survey. It's a private company. So I'm going to target non-advanced Python users or QGIS users. So we won't go into any hardcore here. I want everybody to be able to grab a bit of this stuff. So using Python in QGIS without writing a plug-in, there's several ways we can do that. We are going to focus on the last three ones down here: the Python init function, the expression engine, and how to use Python in actions. I'm going to quickly run through the four on the top. We have the console and editor. The console is in the plug-in menu. There's a Script Runner plug-in that's useful if you have several scripts you've made and you want to run them and use them as an archive. You can run a piece of Python code when events happen on your project, like open, close, and save. Here's a simple example of code where we need the river layer to be below the lake layer. And you can, like, on the save project event, run your code to check if that's true. Newcomers to QGIS will probably think, well, it's a line layer, it has to be on top of the polygon layer, but not in this case. OK, and then the Processing framework is really huge. And you can do a lot of stuff in there. We won't deal with that today. So the first problem I ran into: we need to create oil tanks that had to be rectangular and that had to be dimensioned to a given tank volume. As an example, a 15,000 liter oil tank has dimensions of 1.75 meters by 6.5 meters in length. So it could be quite difficult to draw; you had to measure, perhaps use plug-ins to be able to draw the 90 degree lines. So I came up with a solution of creating a custom UI and using some Python code. So when you draw your oil tank, before it's saved, it's changed to the correct dimension. There's a UI and some code I'm going to show you. Here's the code. You can read it. Now I just put it there for you to get when you get the slides. So let's take a look. So let's try the old oil tank. You see, it could be quite difficult for me to get the right angles and dimensions. So I came up with this one. OK, I draw something. It doesn't matter what it is. And I finish drawing. And up pops this custom form. I can pick the size of the oil tank. I can call it T1. Now watch this geometry. Take a larger one. Let's call it T2. OK. Then you can move and rotate them, so they go to the correct place. So how does this work? Up in the properties, the fields tab, I can provide my own UI file. It's a path here. I did the UI in a designer called Qt Designer. You can drag the different controls in and set up the combo box with the dimensions. And then there's this piece of code up here. This is a text file with my Python code in. And formOpen is a method in that file. And that's all it takes. Take a quick look at the file here. There's a method called formOpen defined. The feature is whatever I draw coming in. And I overwrite the OK button event to my own event. And then grab from the feature, the geometry, this bounding box and the center. That's the point. I get the width and length of the oil tank from a dictionary down here. And then create a new geometry and set the geometry of the feature. And then it's changed. Pretty simple and easy. There's a blog post from Nathan down there that will tell you in depth about how this stuff is working. 
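A rough sketch of the kind of "Python Init function" being described, for the QGIS 2.x / PyQt4 era — not the speaker's actual file. The widget names ("sizeCombo", "buttonBox") and the size table are assumptions for illustration:

    from PyQt4.QtGui import QComboBox, QDialogButtonBox
    from qgis.core import QgsRectangle, QgsGeometry

    # tank size label -> (width, length) in map units; illustrative values only
    TANK_SIZES = {"15000 litre": (1.75, 6.5), "25000 litre": (2.5, 10.0)}

    def formOpen(dialog, layer, feature):
        combo = dialog.findChild(QComboBox, "sizeCombo")           # from the custom .ui file
        buttons = dialog.findChild(QDialogButtonBox, "buttonBox")  # the OK/Cancel button box

        def fix_geometry():
            width, length = TANK_SIZES[str(combo.currentText())]
            center = feature.geometry().boundingBox().center()     # keep the sketched location
            rect = QgsRectangle(center.x() - length / 2.0, center.y() - width / 2.0,
                                center.x() + length / 2.0, center.y() + width / 2.0)
            feature.setGeometry(QgsGeometry.fromRect(rect))        # replace the rough sketch

        # swap the freehand polygon for a correctly dimensioned rectangle when OK is pressed
        buttons.accepted.connect(fix_geometry)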
OK, the expression engine or the expression string builder is something I found in many places in QGIS. There's the toolbar and rule-based styles. You can use expressions in labels, the field calculator, print composer labels. It's a great tool. So what can we do? Problem. Yeah, my colleagues — I made the templates for our maps. And I put my initials on. And they're supposed to change them to their own. Some of them are just, well, at least my initials were out on a lot of maps I didn't have anything to do with. So I thought about adding a new function to the expression engine to be used by a label in the print composer. You all know the expression engine, I guess. So I added a FOSS4G group. And I'm now able to get the person who's logged in on the computer. And it's quite easy. I have my code in a user functions Python file. This is the code. It's just one line of Python code, getUser. And it's returned. And I can use that in the print composer. Take a look at that. It's this one. You see, getLogin. And it's just by putting it in here. Now, another problem. I need to get the temperature for a series of points. Now, how do we convert an xy point to a temperature? Well, we go out looking for a REST API. We have the open weather — I believe, actually, it's openweathermap.org. Yes, it is. We have the request at the bottom, the link. And it just states a latitude and a longitude. Now, when I run that expression here, that request in a browser, it will return the JSON for me. And marked in red, you can see there's a temperature. And it's in Kelvin. I can also get the pressure, humidity, and the distance to the nearest weather station to the point. Then in Python, I can drag out the temperature by the query down at the bottom. Let's try and see it. Temperature. Have a quick look inside. There's already a column made ready for that, called temp. I will make it editable. I'll enter the field calculator. I want to update the temperature for each point. I go into my FOSS4G group, get temperature. Now, there's a lot of help text here. And it does take some parameters. It's not valid yet. So I need to tell it what kind, if I want it in Fahrenheit, Celsius or Kelvin. And I just did that by typing an F. You see, it says 51.8. It just does it on the fly. If I put in a C, it returns 12. So let's try getting the degrees in Fahrenheit for the points. F. OK, of course, I need to tell it which column to enter it into. Now, it will call openweathermap.org for each record and update it with the temperature. That's very useful. A lot of data will nowadays be exposed through a REST API — address data and meteorological data, all sorts of data. Let's just start looking. Now, actions. Problem: we need to access a Google Street View for a given direction. Solution: well, put some Python code in to get this Google Street View. Let's take a look at that. I have a points table here. I go to the actions tab. I have one here called Street View Looking West. There's some Python code. As you see here is the URL for the Street View. Y and X are added when I click on the point. And then, well, the data is collected and loaded and shown in an image. Not a lot of Python code in here. So what does this look like? Go up to my actions, Street View Looking West, push. I can put it on. Well, Looking East. So I thought about how about building a Street View plug-in. But then, let's look at the plugins. And right here, a guy already did make a plug-in. And that's, of course, a lot better than my action. Well, now my action is the good one — it's the other plug-in that's not working. 
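A sketch of the two custom expression functions being described, in the QGIS 2.x era. The group name "FOSS4G" follows the talk; everything else — the function names, passing lat/lon/unit in explicitly rather than reading them from the feature, the import location, and the missing API key that the current openweathermap service would require — is my own assumption:

    import getpass
    import json
    import urllib2  # Python 2, as shipped with QGIS 2.x

    from qgis.utils import qgsfunction  # in QGIS 3 the decorator moved to qgis.core

    @qgsfunction(0, "FOSS4G")
    def getuser(values, feature, parent):
        """Login name of the current user, e.g. for a print composer label."""
        return getpass.getuser()

    @qgsfunction(3, "FOSS4G")
    def gettemperature(values, feature, parent):
        """Temperature at (lat, lon) from openweathermap.org; unit is 'K', 'C' or 'F'.
        Note: the current openweathermap API also expects an appid parameter."""
        lat, lon, unit = values
        url = "http://api.openweathermap.org/data/2.5/weather?lat=%s&lon=%s" % (lat, lon)
        data = json.loads(urllib2.urlopen(url).read())
        kelvin = float(data["main"]["temp"])
        if unit == "C":
            return kelvin - 273.15
        if unit == "F":
            return (kelvin - 273.15) * 9.0 / 5.0 + 32.0
        return kelvin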
Yeah, just to recap. We saw the Python init function where the tank generator was used. We saw the expression engine or expression string builder, where we were able to add our own methods and a new group to the expressions. We got the username. We got the temperature from the service. And we saw QGIS actions with Street View. That's also a service. So I like the idea that you, in this quite easy way, can access these REST APIs. Yeah. I saw your scripts. They're pretty well documented. Could you go over how you do that? So that, like, for example, the temperature thing, you've got the little help there to explain the function. It's an HTML syntax that's shown up from here. So if we take the expressions and point it out — all this is HTML written in the method. Yeah. Can I just comment on the function? Yeah. You showed the example when you opened a document. You could bind to two or three events. The open document event, closed document event. And then you could fire whatever scripts from there. Are there other types of events that you can bind to as well? So if I zoomed into a specific area or after editing a feature, it will automatically trigger something. If I want to do quality control and something similar to the tank, I want to run a process every time a feature has been added. Yeah. You can do that. Yeah. Oh. It's on. Oh, sorry. I was just asking if there's other events you can tie into in QGIS, other than the open document, closed document. So you could basically have a whole bunch of things to do different, basically, processes. Yeah. You can also throw in validation code. And you need to enter this and this. And you can also put your validation out in the front in the UI out here. Like if you need a tank number to be between certain values, you can use a control that spins it up and down within an interval. So you can do a lot of the validating out here also. But also in Python, you can validate the input. So would you recommend creating your own plug-in, or tying into one of these other ways of getting Python in? Which way would you recommend? I like the expression one, where you extend the expression engine. I like that a lot. But I mean, it kind of depends on your needs. Any more questions? For this code? Yeah. Or the clicks? No. But I'll add the stuff to the slides. So if you get the slides, you'll get the code. And where do we find the slides? Well, a big secret. I'm not sure if Swarovski will compile it up or put it out on my home page. You email me. Thanks. Yeah. Thanks.
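A small sketch of hooking the kind of extra events asked about in this Q&A; the layer name "oil_tanks" and the check body are placeholders (QGIS 2.x API):

    from qgis.core import QgsMapLayerRegistry

    layer = QgsMapLayerRegistry.instance().mapLayersByName("oil_tanks")[0]

    def check_new_feature(feature_id):
        # put whatever quality control you need here
        print("feature %s added to %s, run the checks" % (feature_id, layer.name()))

    layer.featureAdded.connect(check_new_feature)   # fires for every newly added feature
    # other useful hooks: layer.geometryChanged, QgsProject.instance().readProject /
    # writeProject for project open/save, iface.mapCanvas().extentsChanged for zoom/pan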
|
This presentation will enlighten the novice Python QGIS user with different ways of running Python code in QGIS without the need of building a QGIS Python plugin. Any QGIS user could start writing small Python scripts for automating, customizing and extending QGIS, making their daily workflow an easier and more fun task to complete.-Python through the QGIS Python Console and Script editor: This would be the most obvious place for PyQGIS newcomers. The console and editor comes with syntax highlighting, autocomplete and easy integration to QGIS.-Scriptrunner: A handy plugin for running Python scripts when objects need to be instantiated.-Extending the Expression engine: Using a startup.py in your .qgis2 Python folder with a @qgsfunction(0, "Python") attribute. An example is shown adding the name of the current user to a label on a Print Composer composition.-Run script on project event open, close and save: You may want to validate if a certain table is open, and notify the user if it is not, when opening or saving a QGIS project.-Python in QGIS Actions: Extend your QGIS actions with Python code. -Python Init function: A powerful feature in QGIS when creating new features. You can validate and programmatically edit your attribute input. One can also process the newly digitalized geometry. An example is shown creating rectangular oil tanks with predefined dimension.-Processing Framework: Scripting methods from the Processing Framework. Useful when you need to loop or run a batch of commands from the Processing Framework.
|
10.5446/31688 (DOI)
|
So good morning everybody, I'm happy to see you here for the GRASS GIS 7 talk called your reliable number cruncher. The idea is to showcase what has happened in the past almost six years since we started the GRASS 7 development, initially very carefully but then with increasing speed. So I am having a lot of information here but consider it as a kind of lightning talk where we try to get through a massive amount of news. So I regularly receive that people have some idea what GRASS GIS might be or could have been, something like this, or these are screenshots from the 80s which also indicates that the software exists for now 30 years which is something pretty exciting. It's probably one of the oldest software packages out there in the open source geospatial arena but it is continuously developed and this is the big difference to some other packages. So what has been happening is of course some evolution and you can already guess that we got a new user interface, completely new user interface, many new functions and features which I will walk you through now. Just to start with something harmless that you find also elsewhere: histogram tools. These are more slides for those in the room who have already been using GRASS in the past, maybe the versions GRASS 4, 5, 6. I got to know even a GRASS 2 user in the past days, maybe also here. So nothing special, you want to draw your map, your profile, and of course you can do so. What's already interesting is that you can do this also with massive amounts of data. So we are testing for example showing 48 billion pixels, that is the current 25 meter elevation model of Europe. You can do so and not all software packages are able to do that. You can also combine legend and histogram, you see over there the distribution in the legend, something which is already a bit more fancy and pretty useful especially if you are in this area here where we have very homogeneous values and then only some differences in part of the area. Okay then you can do the obvious thing, draw grid lines, rhumb lines, geodesic lines, whatever, something which reminds you that geodesic support is also not found in all those GIS out there because it requires some special computation. Something new, still probably a bit in development let's say, not all nice features are there which we want to see, but something pretty promising: the new graphical modeler. This enables you to create a graphical workflow and as a bonus you can write it out as a Python script. So at this point you can graphically combine your steps into a workflow and then eventually turn this into a script, maybe for further modifications in future, but especially run this as a batch job. So it is easy to go from a graphical representation to something automatic. A few highlights in terms of vector data processing. GRASS GIS is a topological GIS. It has always been a topological one and there's really no plan to change this to simple features because we think to maintain quality you need really the topological control and the possibility to apply truly topological tools to your data. Along with that we have a digitizer, a topological digitizer of course, this enables you to get direct control over your newly created data. You can see if they are topologically correct. If you are not familiar with that, just to give you an idea: if you have adjacent areas the shared boundary is one boundary and not two boundaries as in simple features. 
This makes a difference because you know if you are not precisely digitizing — and you will be surprised, there are still people digitizing nowadays, this is something which is still relevant of course. Once you have one shared boundary you cannot have gaps and slivers naturally and this is something which renders the idea interesting. You see from the feature overview there that we also have support for backdrop maps so that you can have an underlying map from which you want to digitize or you can even copy features from that if it is a vector map, and other things. So especially interesting about the topological back end in GRASS GIS is that it became much faster. This is something — well, you usually have more quality at the expense of more computation and this holds true for vector topology as well. You have to do more computations because you have to check if it is correct or not, but you can see here a comparison of GRASS 6 and GRASS 7. The GRASS 7 line is the green line which is almost horizontal, of course it is not perfectly horizontal, again that's impossible, but you see the increase of seconds to do some computation is really dramatically lower than it was for GRASS 6, and while officially GRASS 6 is the stable version you can already get of course the GRASS 7 snapshots anytime, and we have been making tests with huge vector maps to understand where optimization can still be done. Another topic is vector network analysis. There's a rich tool set available. It now comes also with a graphical interface, you can see from the GUI you can easily select your various algorithms you want to run, shortest path, the classical ones are there but there are also bridges, visibility network or centrality and other tools available which are probably a bit different or giving you some extra features. So what is vector network analysis for, if you are not familiar with that: you take a graph which could be a street network and then you want to move on top of this network. This is a classical example here. Travelling salesman problem: you have several points to visit on your road network and you want to understand what is the optimal path between these different points, which is not necessarily the shortest distance, but you could also use attributes and say okay traffic plays a role, you can make it dynamic and fetch traffic data dynamically from some database and then run this thing on top of it. So you would even get different graphs in function of the time of the day. Switching to raster data, support for massive raster data, this is something which we have been working on in the past years in order to be able to be fast with also huge data sets. As you know data are growing like crazy and hardware probably less so, so at this point we need optimization of software. The question is what means massive, and massive is something which is not so easy to define. Of course the size of the data set plays a major role but it is also related to hardware resources, software capabilities and operating system capabilities. GRASS is a portable system which runs on multiple operating systems, Windows, Macintosh, Linux, AIX, whatever, various systems, and at this point it really depends on the operating system you are using. 
Limiting factors, as we have already heard yesterday in a talk: RAM is not that costly anymore but still way more costly than disk space, and so in many commands in GRASS you find the opportunity to switch from a RAM based memory model to a swap based, disk based memory model in case that your memory is not sufficient — you are able to outsource kind of the computation onto your disk. This takes more time naturally but still you can do it even on limited systems like a laptop for example. Largest supported file size is also an issue and we have been working on implementing even for the vector data the possibility to exceed on 32-bit systems the barrier of a few gigabytes. Also here a nice curve to show this is a cost surface calculation and you see from a nonlinear increase in computational time it has been changed to a linear one which makes quite some difference, of course here millions of points and the seconds to compute the thing. I have here as a standard example — oh sorry to see this laptop here — you can perform a PCA on this, that is a principal component analysis, with 30 million points which you'll get from satellite data for example, in only six seconds, and this is something which you cannot easily do in some other statistical software packages. So what else do we have? We have a lot of new tools for hydrology. You see here a kind of flow chart, stream tools, channel order, segmentation, basins, what else is there, yeah whatever, you can read yourself, there are some scientific publications available and especially in the GRASS wiki you can find pages dedicated to that in order to see how to compute things, and again we have been testing the watershed or hydrological tools with enormous amounts of data and at this point we are always happy to receive comments "my file is still bigger and it can't be done" and then we can see if we can do something about it, but it starts to become difficult to find such huge data. What else can you do? Programming: you can perform own development of course, as before, it is open source, but there's some help for this and something completely new is the Python API, so we have now Python support integrated and this is something which already gained interest in the mailing list, we see new users popping up which we have never seen before and they say okay I'm doing Python programming, and Python programming is fairly easy and the GRASS API gives you now the possibility to connect not only to the commands themselves but you can also connect to the underlying functionality even at low level, meaning at the C library level, and just mix everything together as you need it. Again we have the programmer's manual which is documenting this and we have a new API called PyGRASS integrated in the normal manual where you can find classes to do topological operations and so on. You can also use GRASS as a batch system, you don't even have to start it for using it, but you can just put some Python lines together and then do a complete batch processing starting from your shapefile or your GeoTIFF file or whatever you have, retrieve something out of that and then put it somewhere, be it into a different system or online with WPS, whatever you need. A publication, Open Access, is describing the architecture and I guess the slides will become available, so I will make mine available for sure so you can go through and check that. The programmer's manual, already mentioned — you have a search engine there and the full functionality documented. 
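A minimal sketch of that batch-style GRASS Python scripting, using the grass.script interface; the file and map names are placeholders, and it assumes a running GRASS 7 session whose location matches the projection of the input GeoTIFF:

    import grass.script as gscript

    # import a GeoTIFF, derive slope, look at some statistics, export the result again
    gscript.run_command("r.in.gdal", input="elevation.tif", output="elev")
    gscript.run_command("r.slope.aspect", elevation="elev", slope="slope")
    stats = gscript.parse_command("r.univar", map="slope", flags="g")
    print("mean slope: %s" % stats["mean"])
    gscript.run_command("r.out.gdal", input="slope", output="slope.tif", format="GTiff")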
Okay this leads, since we speak about programming, to the fact that you can use GRASS as a backbone. On the consumer side this has already been done for a while, there's the integration in QGIS through Processing, the former Sextante, and this allows us to, let's say, be a QGIS user but still use the GRASS functionality if QGIS doesn't provide what you need, and it works like this: you have in the main menu the Processing tool set, here there are also other tools registered like SAGA or GDAL tools and so forth, and somewhere in this tree if you have GRASS installed you also find your GRASS commands, and if you want to calculate for example a watershed you load the elevation model, GeoTIFF whatever, into QGIS, go there, run watershed and you'll get your watershed out and it will be shown again in the normal QGIS interface, which means that internally GRASS is called, everything is calculated and then the result returned as GeoTIFF or shapefile. This is another example, the dissolving algorithm based on an attribute, so here we dissolve this map to something else, a vector operation on a shapefile, and you'll get back a shapefile at this point. So even more complex integration, again with Processing: you can load just a random workflow to illustrate what's possible, you can fetch your data from a PostGIS database or from a WFS web service or WMS whatever into QGIS, you go to Processing, do your operation and get back your result, and this is something which is really coming out nicely — at the Vienna code sprint we have been updating this Processing set to GRASS 7 so you have both in parallel now, GRASS 6 support and GRASS 7 support. Furthermore we have R integration, this is something which exists since 2000, but just to remind you that this is possible: there are spatial classes in R available and additionally there's spgrass6, it is still called 6 but it runs equally with GRASS 7, and like this you can go into your R session and fetch data from the GRASS database and do your statistical analysis on top of it, this is something which we use a lot in our research. And eventually the WPS support, web processing service, we have integration in — I forgot to put the 52North logo there — ZOO, PyWPS and 52North, like this you have the possibility to create own web processing services using the almost 400 commands which are available in GRASS, and what's particularly interesting, you can see here this XML style documentation, that is the self-expressed, so to say, explanation of each GRASS command, so you take a GRASS command and you query it "what is your WPS process description" and it will return this description, and like this you don't have to manually register all the various parameters of a GRASS command but it is just parsed from this file. An extra bonus here is that if you write your own scripts, Python, shell, whatever you prefer, using the GRASS parser, even your own script will do the same thing again because this is generated by the GRASS parser itself, so this is applicable to all commands. So, a quick walkthrough of image processing: we have a new georeferencing tool, here an example, you have some unregistered historical map, there I loaded some OpenStreetMap and this enables me to find the corresponding ground control points, once it tells you what the error is you can do error minimization and then run a variety of transform algorithms like polynomial or Lanczos, and others are available there. For image processing of multiple channels we have the possibility to do scatter plots now. 
You can of course zoom into them, you can look into your feature space there, you can also perform classification. Supervised classification was already always there but we have a new interface to that, meaning you load your, for example, RGB composite or some false color composite, you can digitize inside, you get the spectral response out there and then you can use that to train your classification model. Furthermore we have unsupervised classification, this is also new, this allows you to do a segmentation based classification, you can see here using different thresholds you can decide on which level you want to segment the thing, and from an orthophoto to something like that — you can use i.segment for that. So, to say something about the bigger data: we have the possibility to run GRASS on supercomputers, clusters and so forth. As mentioned already we have the possibility to compile GRASS on more or less any operating system, and these bits here come sometimes with, for me, unusual operating systems, but it doesn't matter, and we are able to not only run GRASS on a single core but you can make use of the job system, the queue system which is commonly used on supercomputers because it is a shared resource, or if you want to do cloud computing you can fire up your virtual machines and then just run your stuff remotely. In my research foundation we have been using Grid Engine for that, like this you write a small script to launch the different jobs, for example satellite data reconstruction or something like that, and the job engine will then distribute the jobs over all these nodes there to compute them in parallel, that's not much effort really, in the GRASS wiki you can find documentation. And eventually some mentioning of the new possibility for temporal support: we have spacetime cubes now available in GRASS GIS, so new spacetime functionality here, and you can see there's already a rich set of commands. In the first place you define your container, the spacetime cube, which means you say okay my data set starts in this year and ends in that year and semantically we have monthly observations or hourly or daily or whatever, and then you put in, in a simple list, the names of the files — it can be raster, vector or a volume, volumetric raster — along with the timestamp, and then it will automatically register everything properly and you can also do gap filling, for example recalculate missing maps and so forth. Again also here a scientific publication is available and a wiki page. Just to give you an idea, you have visualization tools, the timeline tool which shows you in the first line for example point-in-time data which are for example meteorological observations which you get regularly from a station, and the other two are continuous or non-continuous information — sorry, data — covering a period, for example a month or a year or whatever it is, and this you can all define, and then since it is registered in this database you can then do aggregation in an easy way, which means if you want to calculate the annual summer temperature you just define your summer and say okay give me the average and that's it, that's one command, and likewise you can do a climate change analysis, take 30 years of data and then do the aggregation. So here we have our computation, here we have some example, this is a plot of a single pixel in a stack of data and you'll get out in this case chlorophyll content over time in one position, or you can also fly through your data, this is a timeline of MODIS LST data, land surface temperature, which we have been doing. 
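A sketch of that temporal workflow using the GRASS 7 t.* modules; the dataset and map names, dates and the monthly increment are placeholders of mine, and exact parameter names should be checked against the module help:

    import grass.script as gscript

    # 1. define the container: a space-time raster dataset with absolute time
    gscript.run_command("t.create", type="strds", temporaltype="absolute",
                        output="temp_monthly", title="Monthly temperature",
                        description="Monthly temperature maps")

    # 2. register existing raster maps with a start date and a monthly interval
    gscript.run_command("t.register", input="temp_monthly", flags="i",
                        maps="temp_2010_01,temp_2010_02,temp_2010_03",
                        start="2010-01-01", increment="1 months")

    # 3. one command for the aggregation, e.g. the mean summer temperature
    gscript.run_command("t.rast.series", input="temp_monthly", method="average",
                        where="start_time >= '2010-06-01' and start_time < '2010-09-01'",
                        output="summer_mean")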
So this leads us to visualization, this is more or less the last topic I want to show here: new animation tools are also available, and as you can already guess from here you have it not only in 2D, what we have seen before, but also in 3D, which means you can define your view. This is an example for a lidar data time series, a moving dune, which we have been showing two days ago in the spatio-temporal workshop here — you find the material online — and this enables you to get really an idea what's going on in your area when you have time series available. For the 2D case we have additionally a new swiping tool, this is interesting for disaster management, before/after, this is the tsunami in Japan as an example, you see how the flooding zone changed and you have the swiper and you can just check in a detailed mode, zooming into what has happened in that area. The visualization tool also enables you to show volumes. Volumes are in this case created by small voxels, that are volumetric pixels, and to look into a volume you need to use transparency for example or isosurfaces or profiles, and then you can put your profiles into your volume like being seen over there, or you can use transparency to get an idea of what's going on there. And eventually the possibility, since we combine big data and visualization: there's a nice kind of theater at North Carolina State University and you can just project your huge data set there onto something with this combination of different video projectors, and this enables you to really get a better idea of huge data. Something especially tricky is this dune area, coastal zone here, because it's extremely long but not very high, at this point it is not easy to look at on a monitor, but if you are lucky to have something around the corner then you can do visualization like that. So to summarize, we propose GRASS as before as a platform for sustainable open science but of course also for consultancy or whatever you are doing. The keywords are here reproducibility — we can reproduce things because we have the source code available, this applies to any open source software, and for science this is for us the natural habitat because otherwise, in a black box research style, we cannot figure out what has really happened or what other authors have been doing. Return on investment is another keyword, so as an example we have commands which have been developed a long time ago and they are still available, but if you now on top of that develop for example some procedures, you are still able to use those even 10 or 20 years later. This is something exciting, maybe not for the younger people here, but those who are doing GIS stuff for a long time, they know — you remember, I have done something like 10 years ago, let's grab the script and see what happens, and we are pretty sure that with minor modifications you can even run them in GRASS 7, and we have documented the changes from the previous version: if a parameter has been renamed to something more reasonable you find it in a lookup table. 
Reliability: we think that the new testing and quality control system, which I haven't shown here but you can find it online, is the way to go to figure out if everything continues to work. And longevity for open science: the code integrated in GRASS GIS survives even longer, meaning if you have a contribution to make, please contribute, because we try to add that — we have also GRASS add-ons which is fairly interesting and this gets a kind of gratis maintenance in future, because we say okay if it is on the GRASS infrastructure and we have some changes internally we just propagate it through all the code, so that is something which may interest you. Okay, I'm done with my time. Where to get the stuff: GRASS website, we have a wiki, mailing list, what else, documentation, free sample data which is also used in the examples, this enables you to play around with the software if you didn't do so, and at this point I would like to conclude. Thank you very much and see you.
|
GRASS GIS (Geographic Resources Analysis Support System) looks back to the longest development history in the FOSS4G community. Having been available for 30 years, a lot of innovation has been put into the new GRASS GIS 7 release. After six years of development it offers a lot of new functionality, e.g. enhanced vector network analysis, voxel processing, a completely new engine for massive time series management, an animation tool for raster and vector map time series, a new graphic image classification tool, a "map swiper" for interactive maps comparison, and major improvements for massive data analysis (see also http://grass.osgeo.org/grass7/). The development was driven by the rapidly increasing demand for robust and modern free analysis tools, especially in terms of massive spatial data processing and processing on high-performance computing systems. With respect to GRASS GIS 6.4 more than 10,000 source code changes have since been made.GRASS GIS 7 provides a new powerful Python interface that allows users to easily create new applications that are powerful and efficient. The topological vector library has been improved in terms of accuracy, processing speed, and support for large files. Furthermore, projections of planets other than Earth are now supported as well. Many modules have been significantly optimized in terms of speed even by orders of magnitude. The presentation will showcase the new features along with real-world examples and the integration with QGIS, gvSIG CE, R statistics, and the ZOO WPS engine.
|
10.5446/31690 (DOI)
|
Possibly somewhat smaller than GRASS GIS and somewhat more unknown here at the FOSS4G community, and that's why I'm here to present it to you, to get a bit more visibility on our project. Okay, so I'm going to talk about ILWIS, the Integrated Land and Water Information System. It's something that we develop at our institute. We are the ITC. Formerly we were an independent institute, now we are a faculty of the University of Twente in the Netherlands. A relatively small organization, and well, we're doing a lot of things in developing countries like research, teaching and projects, and as part of these we are also developing our own software and we do that as part of the 52North initiative. So 52North is a collaborative platform where research and innovation is combined with software development, and the objective is then to advance this development and to get working software. 52North has this name because we as ITC and also the University of Münster and the 52North office are at approximately 52 degrees North latitude. So 52North has different communities, ILWIS is one of them, so there are nine communities in total and this is a very good way to have these communities together so that we also have cross development as we do in our software development. ILWIS is a relatively small community, but it depends a bit how you look at it: if you count the number of code lines then we are by far the biggest one of these communities. So as I said the name stands for Integrated Land and Water Information System. We also have quite a rich history, so we started already in 85 with a small project in Indonesia for doing land use zoning and watershed management. Over the years we have been gradually extending the software and I will tell you a bit of history throughout the presentation as well. So the key features are in fact raster operations and image processing, but we also can handle vector operations quite well. In that sense I think there's quite some similarity with GRASS GIS also. We concentrate also on map statistics, and the projection issues, coordinate systems, are very well developed within ILWIS. So visualization is also a strength here and the ease of use, and that's why the software has been used a lot in countries where we are active, typically in Africa, Asia, South America, and there we have a pretty large user base. So a bit of history: we had, yeah, quite some time of development as proprietary software. We also had a short marriage with PCI Canada but soon after that we went into shareware and finally in 2007 we went open source under GPL, and in that period we also realized that we had to link with other softwares and that's when we started, in 2008, to also incorporate OGC standards. Now here's some functionality, pretty much similar I would say again to GRASS. Here you see an example where we have different layers, and the bottom layer is the OSM then, and on top we have temperature in this case, and what we can do is also here do a path measurement, that's the line there on the left, and then you can do all kinds of, yeah, histograms, graphics and whatever you like to do there. We have several modules also incorporated, one is on disaster management and this is one on a spatial decision support system where you can weight different scenarios also by means of a slider as you see at the top left there, and then you can apply different weights to different types of tasks and then you can see different scenarios in terms of maps here at the bottom. 
So just one of the functionalities that have been added some time ago. We do also have space time cube functionality. So here you see an example of black death happening in the last century so don't have to demo here but if you would like me I can show it to you. So you can rotate the cube in any direction and then also the bottom layer the reference layer you can move up so with time so time is on the vertical axis. Here you see another example of pedestrian research project where you see also again on the Z axis the vertical one pedestrians walking in a particular area and then you can move these layers like the topographic layer and there's also a thematic layer which could be land use also and then you can exactly see at which positions in time that particular stream of pedestrians were. So there are many applications in fact we also now are applying that to a project called Envirocar which is a crowd sourcing project where people have small OBD device in their car and then also track their path of driving and at the same time this connector is then storing engine parameters like rotation and exhaust so this is for environmental research. Another module that we have developed is Geonetcast so this is a module which is able to manage information coming from different satellite imagery channels. So you see here an overview of the different satellite coverages so it's pretty much world coverage of different satellite data streams that we can get into devices such as these ones. It's actually again a quite low-cost solution and not surprisingly because we are again implementing this technology in developing countries. So in fact for a few thousand US dollars you are up and running with a cheap satellite dish a computer and a interface box. So you see here some pictures of training and this is actually what we call our control room at ITC where we can receive also all the satellite imagery and do basic processing. So this is the what we call the Geonetcast toolbox that this has been implemented inside the ILWIS software and where with which you can manage the different satellite data streams you can select to pre-process and so forth and then you can use ILWIS as the base software then to display the different layers and then do further processing. Also here you see again that it is also combined with a WMS layer. So this has been quite a success for us. There has been many projects that have been implementing Geonetcast and that's also the way that this kind of our business model that we try to get this funding from the different projects also to develop the toolbox further and also to implement that in several countries. So of course we're trying to do this ourselves but by means of the training and training the trainers people in Africa can do that on the long term themselves. So next step after some time real so again realize that yeah making your software open is not the only way to get more functionality in it and as we also have quite a limited developer base we realize that we had to make the software also more modular and to invite more people to develop. We thought yeah C++ and so that's the basis for the software. They're not always yeah people are not so skilled and they're not so many people who are able to actually help us with this development. So we decided to open it up in a way of modularizing it, refactoring it and also develop yes also a Python API. So we are now in the process and that was just the recent start of getting the software more fit for the future. 
It's a bit of a hard thing though, because we can't find easily the funding, because if you go to a project and say okay, yes, we have this nice software and this is how it works and now we are going to overhaul it completely and we need your money to do that, then the project says okay, but we're only interested in that particular part and we don't pay for that basis. So we are trying now, we're getting some funding from our own institute and we're gradually coming there to build this new basis with the small developer base that we have and with internships and so forth. So this is the architecture we are looking at at the moment. We have in the central part of this figure, this diagram, a kernel, we call that the ILWIS objects engine, and in fact the whole process that we are trying now, to refactor the software, we call ILWIS Next Generation, ILWIS NG, but the final product will be called ILWIS Objects. So at the bottom part you see we are now going to provide different data connectors so we are able to get different data sources, and they will be then transparently built and are going to be used by anyone who is using a particular data source and is not actually even aware of whether it is PostGIS or WFS or any other source. So we are also building these process connectors for different processing parts like WPS and so forth, and then on the top end you'll find the different, let's say, user interfaces, so via a Python API people will be able to access ILWIS functionality, and then the desktop is also important because of course people also want to see the results of this processing, this functionality, and to see a map. So that's something that we are now kind of ready to start also, to get the functionality of what I would call the current ILWIS into the new setup. So modularity and extensibility is important of course, but we also think that we are providing high performance processing for satellite imagery, because the back end of the ILWIS engine is really constructed in such a way that it can handle large satellite imagery and raster data. The ease of development is of course also an important issue because we would like to extend the software not only with our own developer base but also outside, and then also support web and mobile users. I will come back to that a bit later. So this is currently the development, the things that we have done already: we have an interface of course with GDAL and WFS and then we have now almost this Python API ready. Short-term things that we are working on now are PostGIS and also the ILWIS applications, as I said, that we had already in the current ILWIS. And then, so, the current ILWIS is actually called ILWIS 3.x, so 3.8 is the latest version, and the new framework based ILWIS is going to be called ILWIS 4, and our implementation is in Qt. So an example of the Python API: the good thing is that we have our functionality as we already have it, as I said, so here's just an example of calculating NDVI, and what we are doing here is also reusing the NumPy library, and that's the big win I think here, that people can not only use our functionality but combine it in a fairly easy developing environment with other types of functionality. So our ILWIS next generation can be found here on GitHub, you find some of the documentation, or yeah, all of the documentation here, and if you want to get started with it, yeah, just go to this website and then there are several tutorials already available. 
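Only the NumPy side of the NDVI example mentioned here, as a sketch; how the red and near-infrared bands are loaded (through the ILWIS NG Python API, GDAL, or anything else) is not shown, the two arrays are simply assumed to exist:

    import numpy as np

    def ndvi(red, nir):
        # normalised difference vegetation index; 0/0 pixels end up as 0
        red = red.astype("float64")
        nir = nir.astype("float64")
        with np.errstate(divide="ignore", invalid="ignore"):
            result = (nir - red) / (nir + red)
        return np.nan_to_num(result)

    # tiny synthetic bands just to show the call
    red = np.array([[50.0, 60.0], [70.0, 0.0]])
    nir = np.array([[200.0, 180.0], [90.0, 0.0]])
    print(ndvi(red, nir))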
So this is the new, the next generation; if you want to learn about the current ILWIS 3 desktop application you have to go to the 52North site and then go to the ILWIS section. Okay, the object diagram — not going to zoom in here but it's available also on the same website if you want to know the actual details behind it. So as I said we are involving interns, people like student assistants, to also help us with the development, and recently we also had a Google Summer of Code candidate and he did quite a nice job developing a mobile application based on ILWIS. I have it here, if you want to see it working then I can give you a short demo. Actually, what is it? It is a universal data collection application with which we can do collection in terms of, for instance, water point mapping, land use mapping, disease mapping, registration of cholera patients, registration of malaria patients and so forth, all in one app, and then with a flexible template structure, so if you need it for one application today and then the next application tomorrow you can change the template, and these templates will be — so currently it's under construction of course, but they will be shareable also between different platforms and different applications. How do we do this? These templates will be based on ontologies, so what we are currently implementing in a project which is on water mapping is the relationships between the different relevant classes, so maybe we can zoom in a little bit here. So you see the different actors here, different devices and also the different classes for mapping functionality of a water tap, so these are all defined in this ontology here and we are now in the process of using this ontology as a basis for creating the templates in this particular application. So the actual future for ILWIS, and then version 4, is to actually go on with this ease of use and then make it also possible for people to do the processing and the mapping in one interface, and we are planning to implement a kind of model builder type workflow system here where you will be able to choose your data layers from the catalog on the left hand side and then create a workflow in a diagram as you see — this is a mock-up so it doesn't exist yet — and then you will be able to drag and drop and then to change and to move these things around, and then in real time we need to have the map available, and you see this works like a slider kind of interface. So that's the near future, and then we have some things that we need to do, some headache things maybe: we are going to create a web interface also so that we are not stuck to the desktop as we are at the moment, and then we are planning to also make use of QGIS, in the sense that we are going to develop, just similar to what GRASS has done already with Sextante, to make our toolbox available within the QGIS interface. We are not there yet, we actually think that we might lose some performance there doing this, so we will always have also a separate software there, but this would be good also to increase, let's say, the user base somewhat more. So improving the developer community, that's why I'm here also, to talk with other people to see how we can actually do this, and then, yeah, documentation. Okay, thank you. Yes. Thank you for your presentation. Does your application have any ability to deal with LiDAR data, do you process that at all or do you have any modules that are specific to that data type? I think currently not. Short answer. Would you be interested? Okay. Can try to maybe talk afterwards. Questions? 
Yeah another one. What does it use as the underlying — so you talked about vector and raster formats — what types of data modules does it port to, for instance could it, does it read from Postgres, what types of databases and file formats are you able to incorporate? Yeah, Postgres of course, as I said, well, we are currently having a kind of transition from the ILWIS 3 to the ILWIS 4 version, so we're using GDAL also as a library so we can actually read all the things that are supported by GDAL, and then yeah, WFS, so that's quite a variety of all these sources that we can handle. And yeah, naturally they will also all become available in the next version as well. No? Yeah okay thank you.
|
The Integrated Land and Water Information System (ILWIS, http://52north.org/communities/ilwis/) is a GIS and remote sensing software integrating raster, vector and thematic data set processing into a desktop application. ILWIS is hosted under the umbrella of the 52North project and managed and maintained by ITC, University of Twente, The Netherlands. ILWIS is currently subject to a significant refactoring and modularization process referred to as ILWIS Next Generation (ILWIS NG). This will increase attractiveness for developers and lowers their entry requirements. It will provide a sustainable code base for the next decade and allows for integration with other open source software. Beneficiaries are researchers, educators and project executers. It will allow them to use GIS and remote sensing functionality in an easy and interoperable manner on a single desktop and in a web and/or mobile environment in order to integrate their work with others in a standardized way. Based on requirements analysis meetings with a small team at ITC, an architecture was created to host the modular components of ILWIS NG. The implementation of this architecture was started in 2013 and comprised the creation of the QT-based core software centered around a plug-in concept which supports connectors. This supports different data formats and interfaces to other software packages. As first extensions, a Python API and WFS have been developed and data connectors to PostgreSQL and OGC's SWE are underway, as well as a flexible mobile app environment, making it possible to configure lightweight GIS apps within a very short time. The presentation will embark upon the justification of starting the software refactoring and will provide an overview of the new modular architecture, giving insight into the design choices which were made. The presentation will also expose the GIS and image processing functionalities within ILWIS and how they are made available in the new interoperable setup indicating the libraries and standards on which they are based. Examples will be given on the many projects in which ITC has used ILWIS already and the potential use of ILWIS Next Generation in combination with OSGEO projects in the future.
|
10.5446/31691 (DOI)
|
Well, good afternoon, everybody. Thanks for coming in. My name's Hal Mueller. I'm a one-man software shop up in Seattle. Most of what I have been working on the last few years has been iOS and Mac software. I keep one foot in the open source world. And what I'm going to talk to you today about is some geographic data production that I went through for an application of mine, History Pointer, which is an iOS app, an iOS mapping app with a particular focus. History Pointer is a portable version of the National Register of Historic Places, which is a data set of about 100,000 properties and districts. It's a database that's administered by the National Park Service. It contains all kinds of cool stuff. I mean, it has old buildings, factories, historic districts. Maybe there'll be commercial buildings, a building associated with a particular person, maybe their birthplace, their place they died, a place where the Civil War ended, maybe a particular architect. If you wander around just in Portland with this application, you're going to see lots of really cool commercial buildings. There's a Minuteman II missile silo in the National Register. And there's also the house where Home on the Range was composed. So it's a pretty diverse data set. It's not, in my experience, been a real friendly data set to work with because it was originated starting back in the 60s when we weren't really doing GIS yet. We weren't really doing any geographic information. So my original idea when I started working on this project, or when I started working on the second version, was to link to Wikipedia in order to get some richer information. So if you look at the Broadway Bridge, which is just off our balcony here, the original nomination information, the original history information, is not available at the National Park Service's website. It's a pile of paper that hasn't been scanned yet. But I was able to link to Wikipedia so you can find out a little bit about the Broadway Bridge. So when we get over to the geographic data side, we have some problems. The data set is inherently noisy. It's outdated. I can go home to Seattle and I can find out that the USS Missouri is supposedly moored right across the water in Bremerton. Well, actually, that's where the Missouri was when she was entered into the National Historic Register, but she's now over in Honolulu and has been there for some time. But this is not a data set that the Park Service is funded to maintain. It's certainly not funded to maintain current information. Sometimes it was just entered wrong. Sometimes there was a typographical error. And then some of these descriptions are just inherently really, really hard to geocode. And there's the last one there. If I had a map, I couldn't find it. County Road 326 between Delaware 12 and County Road 83, Duck Creek Hundred. Good luck. But what I have found is that the Wikipedia people really care deeply about the articles they're writing. And so what I've been trying to do is pull the relevant geographic data out of the Wikipedia article. So here we are in the convention center. And we can find out information about the Portland, which is the tug more up the river, and then go straight to the Wikipedia article. And also, I've managed to georeference the Portland, which again, is a ship. The location they have in the original database for the Portland is not correct. If we look at this Wikipedia article, I don't know how many normal people actually look closely at a Wikipedia article. 
It was not something I had ever done until I started this project. But there is quite a bit of pretty rigorous structure. In addition to the plain text, there are numerous info boxes. An info box in Wikipedia is software driven. You'll put in certain keywords with values. And then the Wikipedia renderer will present the information based on the requirements in the info box template and also based on the key values in the info box. So if you look here, we have, in fact, a standard information box for National Historic Register properties, which gives us the location, often the coordinates, significant years, the governing body, when it was added, and this catalog number, that 97000847, which is its unique identifier within the National Register. So what I'd like to do is a couple of things. First, I want to link the relevant Wikipedia articles to the appropriate National Register properties. Because if I've done that much, now I've enriched my product. I've gotten a nice Wikipedia description for the property. And then I also would like to pull out the coordinates if they're available. So the coordinates are embedded in this Wikipedia markup. And it looks like it ought to be pretty straightforward to pull out. And it wasn't always straightforward, but it worked out pretty well. So Wikipedia URLs have several forms. There's the normal Wikipedia URL, which you would type in. Most of us, frankly, aren't going to type in a URL. We're going to hit the home page, and we're going to hit a search box, or we'll do a Google search or something. But this first URL, that's the normal top-level reference for an article. But then there's also a more specific reference, which is that article URL with an ID appended to it. So now what I've done is I've identified the article on the steam tug Portland, and a particular revision. So if you all go out now and tour the Portland and find out interesting stuff, and everybody goes out and updates the Wikipedia articles, that oldid number is going to get incremented. But the original URL is not going to change. And then there's also a numeric ID, which is going to reference the Portland, even if maybe the article gets renamed, or suppose we decide we're not going to have an article on this ship anymore, we're going to have instead one article on all the steam tugs up and down the Willamette River. But that curid number, that's going to stay the same. That's the numeric ID, and that's kind of the gold standard, as far as I've been able to tell. That's the number that you want to be referring to if you want to pull a Wikipedia article rigorously. And then finally, there's an API which allows you to iterate through a series of Wikipedia articles. So this API is your starting point if you wanted to pull out all of the article titles that reference this one in the second and third line, the National Register of Historic Places. So this is where you would start if you want to pull out some subset of Wikipedia articles. So when you pull this article, you have the option then of pulling down the XML. The XML is going to contain the revision info, who revised the page, what was the date, what was their internet address, and the plain ASCII markup. So Wikipedia is not going to send you the rendered markup. It's going to give you the ASCII text version of this thing. All right, so I've got a math degree from a reputable school. I've taken a lot of computer science classes.
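As a brief aside, here is a minimal sketch of the two API calls just described: enumerating the articles that link back to the National Register page, and pulling down the raw wiki markup of one of them. This is not the speaker's code; it fetches JSON rather than the XML mentioned above, and the parameter names are taken from the current MediaWiki API, so treat the details as illustrative.

```python
# Minimal sketch of the two MediaWiki API calls described above: listing the
# articles that link to the National Register page, then pulling the raw wiki
# markup (not the rendered page) of one of them. Not the speaker's code; it
# asks for JSON instead of XML, using current MediaWiki API parameter names.
import requests

API = "https://en.wikipedia.org/w/api.php"

def backlinks(title):
    """Yield (page ID, title) for every article that links to `title`."""
    params = {"action": "query", "list": "backlinks", "bltitle": title,
              "blnamespace": 0, "bllimit": 500, "format": "json"}
    while True:
        data = requests.get(API, params=params).json()
        for link in data["query"]["backlinks"]:
            yield link["pageid"], link["title"]
        if "continue" not in data:        # no more pages of results
            break
        params.update(data["continue"])   # follow the continuation token

def wikitext(pageid):
    """Fetch the plain wiki markup of the latest revision of one page."""
    params = {"action": "query", "pageids": pageid, "prop": "revisions",
              "rvprop": "content", "format": "json"}
    pages = requests.get(API, params=params).json()["query"]["pages"]
    # the markup sits under the legacy "*" key in this response format
    return pages[str(pageid)]["revisions"][0]["*"]

# Example: walk everything that references the National Register article.
# for pid, title in backlinks("National Register of Historic Places"):
#     markup = wikitext(pid)
```

The same queries can return XML instead by changing the format parameter, which is closer to what is described in the talk.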
I'm going to find myself a parser and figure out this parser and just parse the markup. And it turns out that writing a parser for Wikipedia markup is a pretty popular idea. There have been a lot of projects that have tried to do this, and several of them have achieved some success. All right, so looking at all of this, I thought, well, maybe I don't really need a parser. Because this is just a pattern. This is the sort of thing I'm looking for. So maybe I can just write a regular expression-based thing to pull these numbers out. The reason this quote is so famous is that regular expressions, I'm not going to give you the computer science theory behind this, but a regular expression is a really elegant way to say something, but it's devilishly difficult to get it right. I didn't listen to Jamie. The fundamental problem is that what I need to be able to do is extract these balanced delimiters. So if I see the opening brace, I want to see the closing brace and pull all that information out. With a regular expression, with the richness of that particular engine, it's impossible for me to tell the difference, in that bottom line, between matching up to the first closing brace and matching all the way through to the final closing brace. And so in the Wikipedia markup, we can get a real mess. We can have some nested info boxes. We could have maybe multiple historic properties that are relevant to one article. The coordinates might be in a different info box. We see this especially with battlefields, where the main info box is going to be the battlefield info box. That's going to have the coordinates. And then, oh, by the way, this is also in the National Historic Register. And we also just see badly formed input. You're not going to go to Wikipedia and hit a page, and it's going to say, sorry, there's a syntax error in the markup, I'm not going to tell you anything. Wikipedia always is going to display something. But what I ended up doing, this was plan B, was going for what I called a good enough regular expression. That is, instead of trying to write a perfect parser that understands every nuance of the article, I just need to throw away the stuff I don't care about. I need to throw away everything but the NRHP info box. And then, within that, throw away everything but the catalog number and the location. And so that ended up working pretty well. So when I ran that API call I showed you a few slides ago, I retrieved 64,000 articles that refer to the National Historic Register. After running through this good enough processing, I ended up able to handle about three quarters of them. I ended up with 16,000 orphan articles, articles that I couldn't match to a particular historic property. And that's going to be an artificially high number, because many of those are articles like all the National Historic Register properties in King County, Washington, or all historic ships on the West Coast. So maybe, down the road, I can exploit those articles. But these are not the one-to-one matches I'm looking for. So this is pretty good. And I was pretty happy with that. And that's where I was as of submitting the abstract. So I could stop here, but I've got another eight minutes, so I'm going to keep going here. So a while back, a couple of months ago, I discovered DBpedia. And I've learned a lot more about DBpedia in the last two days than I knew when I walked into the building. But DBpedia is a project based on the notion of a semantic web.
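Before moving on to DBpedia, here is a hedged sketch of the "good enough" extraction just described: rather than fully parsing the markup, find the NRHP infobox by counting balanced braces and then pull a couple of fields out of it with small regular expressions. Field names such as refnum and the exact layout of the {{coord}} template vary between articles, so this is illustrative rather than a reconstruction of the speaker's actual code.

```python
# Sketch of the "good enough" approach: locate the NRHP infobox by balancing
# double braces, then pull the catalog number and coordinates with simple
# regexes. The field names (refnum, coord) are assumptions about the infobox.
import re

def find_infobox(markup, name="Infobox NRHP"):
    """Return the first {{Infobox NRHP ...}} block with its braces balanced."""
    start = re.search(r"\{\{\s*" + re.escape(name), markup, re.IGNORECASE)
    if not start:
        return None
    depth, i = 0, start.start()
    while i < len(markup) - 1:
        pair = markup[i:i + 2]
        if pair == "{{":
            depth, i = depth + 1, i + 2
        elif pair == "}}":
            depth, i = depth - 1, i + 2
            if depth == 0:
                return markup[start.start():i]
        else:
            i += 1
    return None  # unbalanced braces: badly formed input

def catalog_and_coords(infobox):
    """Pull the NRHP reference number and any {{coord ...}} template fields."""
    refnum = re.search(r"\|\s*refnum\s*=\s*(\d+)", infobox)
    coord = re.search(r"\{\{coord\s*\|([^}]*)\}\}", infobox, re.IGNORECASE)
    return (refnum.group(1) if refnum else None,
            coord.group(1).split("|") if coord else None)
```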
DBpedia is parsing all of Wikipedia and creating RDF records. So creating facts in a very formal specification, creating records of the facts that are in the Wikipedia articles. This project has been going on for, I want to say, about 10 years. It's quite mature. They're up to version 3.9 now. But all of their extracts are going to be in the form of triples, so it'll be a subject, a predicate, an object. So Tug Portland is a ship. Tug Portland has a latitude of 45.3 north. DBpedia also has a SPARQL endpoint. SPARQL is an SQL-ish looking language that lets you do these semantic web queries. So I'll post these slides, but I wanted to give you a couple of quick links to get back to DBpedia, because there's a lot of other information in there. The two big ways to get to it are either through their bulk downloads or through the SPARQL endpoint. All of this stuff is based on that numeric Wikipedia page ID. They've also got some canned data sets, all of the mountain peaks, large cities, various themed tables. There's about 500 of them. Unfortunately, none of the 500 were of direct use to me. But I want to make just a quick aside. If you're interested in this semantic web stuff at all, there have been two great talks here. One of them was yesterday talking about using the semantic web for humanitarian assistance. And the other was this morning talking about pulling words and phrases out. So if you're interested in what I'm saying now, you're also interested in going back and watching these two talks on the video. So the Portland reference in DBpedia is pretty easy to follow. So here's our original Wikipedia article ID, the steam tug Portland, 1947. From that we can get to a URI, which is at dbpedia.org. That's not really something you can open in a web browser, but that's the specification that you use to search on. That's the key that they use for all of their triples. And then we also have a page link from the Wikipedia page to a different project, Wikidata, which I'm going to talk about a little bit later. So here's a nicely formatted version of all of the facts that DBpedia was able to extract from the Portland's Wikipedia article, in tabular form and then displayed in a human-readable form. Here's what some of the triples end up looking like. So you notice we're keyed always on that dbpedia.org slash resource slash name of the ship. And then the second element is going to be what predicate we have, and then finally the value for that predicate. So I chose to work with the downloads file because I didn't want to have to deal with lots and lots of network queries to SPARQL. I didn't want to have to learn SPARQL. So I'm loading all of the relevant downloads into an Objective-C program and then querying that based on page ID. So the page ID gives me the DBpedia ID. The DBpedia ID is now my key into these other tables to extract the information that I care about. The information that I care about, in addition to the coordinates, was the abstract of the article, the short abstract of the article, some of the media links. All right, so how are you going to do this for your own project? You've got to start with an idea, some theme that you're going to explore. And now you need to find either a template or a common article, some way to pull out all of those relevant Wikipedia articles. From those numeric IDs that you pull out, now you can go into DBpedia and grab the relevant facts. This is going to be your key to the title, the text version, the abstract, and any other properties. So some numbers here.
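Before those numbers, here is a rough sketch of the bulk-download join just described: read two DBpedia N-Triples dumps, key one by Wikipedia page ID and the other by resource URI, and join them. The file names and predicate URIs below are from memory of the 3.9-era dumps (page_ids_en.nt, geo_coordinates_en.nt) and should be treated as assumptions rather than a definitive spec, and the speaker's own implementation was in Objective-C, not Python.

```python
# Sketch of joining two DBpedia bulk downloads: Wikipedia page ID -> DBpedia
# resource URI (page_ids dump), then resource URI -> lat/long (geo dump).
# File names and predicate URIs are assumptions about the 3.9-era dumps.
import re

TRIPLE = re.compile(r'<([^>]+)> <([^>]+)> (.+) \.\s*$')
PAGE_ID = "http://dbpedia.org/ontology/wikiPageID"
LAT = "http://www.w3.org/2003/01/geo/wgs84_pos#lat"
LONG = "http://www.w3.org/2003/01/geo/wgs84_pos#long"

def triples(path):
    """Yield (subject, predicate, object) from an N-Triples file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = TRIPLE.match(line)
            if m:
                yield m.group(1), m.group(2), m.group(3)

def literal(obj):
    """Strip the quotes and any ^^datatype suffix from a typed literal."""
    return obj.split("^^")[0].strip('"')

# Wikipedia page ID -> DBpedia resource URI
resource_by_pageid = {int(literal(o)): s
                      for s, p, o in triples("page_ids_en.nt") if p == PAGE_ID}

# DBpedia resource URI -> coordinate record built up from lat/long triples
coords = {}
for s, p, o in triples("geo_coordinates_en.nt"):
    if p in (LAT, LONG):
        coords.setdefault(s, {})["lat" if p == LAT else "lon"] = float(literal(o))

def lookup(pageid):
    """Given a Wikipedia page ID, return coordinates if both halves matched."""
    uri = resource_by_pageid.get(pageid)
    return coords.get(uri) if uri else None
```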
I end up with about 36,000 DBpedia articles that referenced back to the National Historic Register. We have two different downloads in DBpedia. One is what they claim to be geocoordinates, and the other is what they claim to be keyed bits of information, these mapping-based properties. There was about an 80% overlap between their geocoordinates and the actual coordinates that were in this second file. I haven't figured out what the reason for that is. But the bottom line is you need to look at both of those. So the net change for me was another 1,500 properties that were georeferenced. Some of the things that I was able to fix were articles where I had the location, but I didn't know what property it matched. Some I could match the property, but I didn't know the location. So this is a decent improvement. So I also wanted to point you to a similar project I haven't worked with at all, but I'm aware of it. It's called Wikidata. It's another semantic web project. Their idea is to have two-way communication. DBpedia is reading Wikipedia and then creating a product, and that's the end of it. The notion of Wikidata is you can edit the database, and then that automatically updates the info box in the relevant Wikipedia article. I haven't found an API. I haven't found any good downloads. But here's what the Wikidata entry on the Portland looks like. And in fact, they were able to identify the property and they were able to identify the location. So to summarize, Wikipedia has some pretty well structured information if you know how to find it. If you follow the rules, you start with a list of Wikipedia articles generated somehow, maybe manually, maybe by following a formula. And then you've got a choice. You could either extract that with your own parser. You could use the DBpedia downloads or the SPARQL endpoint. Maybe you can get something out of it in Wikidata. Fundamentally, DBpedia is not human readable. The updates to it are pretty tedious. The downloads only come out annually. So I've been working with records that came out last August. Wikidata appears to be pretty well funded. I think they've got some Google money and they've got some other pretty heavy hitters. But from what I've been able to see so far, Wikidata is not very far along yet. But I guess the challenge I want to send to you is that at this conference, we tend to spend a lot of time thinking about geometry. Geospatial is not just about geometry. It's not just about projecting rasters and points and vectors. It's about getting information. And so this is a pretty handy way to get information into the hands of your users that's already organized geographically. So we can take a couple of questions, wait for the mic, or I can read the question. Thank you. Have you looked at Wikimapia to help you find these places? I have not looked at Wikimapia in detail. No, I haven't. There's one up here. Two up here. Sorry. Does either API provide a way to do this? Do the APIs provide a way to, if you have existing coordinates, see what Wiki articles are around you, or do you have to start with a known article and get the coordinates of that particular article? From what I've seen so far, there's no geographic search in Wikipedia or Wikidata. There might be some geographic search capability if you hit the SPARQL endpoint, but I haven't tried to use the SPARQL endpoint yet. Yeah, I was just curious. You said you got about 1,500 extra results using the DBpedia approach.
But were there any results that you found using your regular expression search and not with DBpedia? I think there are some results that I found with my regular expression code and not DBpedia. But I'm not sure what that means, because I'm working with DBpedia results from a year ago. That is, they captured Wikipedia a year ago. I'm looking at articles that have been updated since then. So I am seeing some results that I'm getting that DBpedia is not finding, but I don't have an explanation for it. OK, well thank you all for coming in.
|
A large fraction of Wikipedia's millions of articles include geographic references. This makes Wikipedia a potentially rich source for themed, curated geographic datasets. But the free form nature of Wikipedia's markup language presents some technical challenges. I'll walk through the Wikipedia API, show how to get to the various places where spatial info might be found, and show some blind alleys I've followed. Examples are from a project that uses Wikipedia to enhance a map-based iOS app of some US National Park Service data.
|
10.5446/31693 (DOI)
|
Okay, I think we're going to get started. Thanks to all of you for coming. My name is Carl Sack. I'm Julia Ginneke. And we are from the University of Wisconsin-Madison. Julia just graduated, and I'm a grad student in the cartography and GIS program at UW-Madison in the Geography Department. We are going to be presenting our process for building what we're calling a mobile situated learning module. It is a responsive map-based web application that was really developed to be used with an international studies course. We had a faculty member who teaches an international studies course on globalization who wanted a web application that students could take in the field, and they could use on their phones and tablets, portable devices, to sort of provide a guided tour of sites in central Madison that demonstrated an economic transition from Fordist modes of production to post-Fordist economies, neoliberal economies. And so we designed and built this mobile module for him. It also has a desktop version, and we will get into some of the issues with how to create an adaptive and responsive application. So I'll turn it over to Julia for now. Okay, for our module, as a seminar, we had a couple goals in mind. We wanted to learn how to design mobile maps. So it's estimated that by 2016, 80% of Americans will have a mobile phone. So that means that mobile phones are pretty much ubiquitous, and mobile maps are a couple finger presses away. So we really need to learn how to design really practical mobile maps. What also makes mobile maps really powerful is location-based services. So they provide users with a customized experience based on their current location. And a second goal we had in our seminar was, being in an educational university setting, we wanted to create an educational tool. And we wanted to take advantage of the nature of mobile devices to create a more alternative tool where we emphasize situated learning. It's a process where the activity and the user context are more emphasized. So as opposed to learning in a classroom environment, like for our module, we put our students in the environment, and they'll walk around, look for buildings, and read up on their history, and kind of tie it to globalization concepts. So it's a tool to accommodate different learning styles. So for mobile maps, you might want to think of what limitations a mobile device will have, and what it will impose on the maps. So hardware is a very obvious one. So mobile devices have more limited bandwidth, processing power, and definitely battery life. So you want to consider all that. But a main thing that we would consider in our design process is definitely the small screen size. So the small screen size, if we don't design accordingly, will potentially cause a lot of screen clutter. So that's something that we really need to emphasize. And something else that makes mobile devices, mobile maps really different is the location-based services. Because of that, there's all these other functionalities that come with it. So locating, searching, identifying are all some things that a user may be doing with their mobile maps. But they also may be doing all these at the same time. So there's a lot of multitasking, and the user may be more distracted. So you want to take that into consideration. So maybe put a warning sign when there's traffic, or something like that. And so when we design a mobile map, we really want to consider the context.
The user context may include the users themselves, their activities, and the location and the time of the map usage. So to address all these differences in mobile devices, we're going to use adaptation as a means. And so we're basically using the concept of adaptive web design. To kind of give you an idea of the difference between responsive and adaptive web design, I'll just talk a little bit about it, because these are two concepts that a lot of people may be a little confused about. So responsive web design is basically an idea that's an alternative to making device-specific separate websites. It was created by Ethan Marcotte, and it uses a combination of fluid grids, fluid images, and media queries to address different breakpoints. So it basically allows for similar content to be displayed effectively across different dimensions. So here's an example of responsive design. So in our module, we're utilizing more adaptive design. So responsive design's basic underpinning is based on the broader concept of adaptive design, where the same information is made available to both desktop and mobile users, but it's more customized. So it's through different representations based on the user context. And other than including adaptation of the layout, when we're talking about adaptive design, there's also adaptation of the information. So the amount of the information or the level of detail can be varied, for example. And adaptation in user interfaces. So certain functionalities can be present on mobile maps, but not on web maps. So a good example is panning. So you tend to probably want to pan more on mobile maps. And searching as well. So with the location-based services, we tend to want to probably use searching as a functionality more on mobile maps. Another example is interaction mode. So for mobile maps, we might use the touchscreen, while on web maps, we might use the mouse or keyboard. And another thing we can make adaptive is the visualization or the presentation. So it could be that the base map generalization can vary, or the size of the icons, or the text style and size of the fonts. These are all a couple of examples. So now I'm going to talk about, or I listed out, some elements or movable parameters of adaptive cartography. And layout is still a very important one. So there's two main types of layouts. Fluid map layout is what we see more in mobile maps. So that's where the map will basically take up most of the screen space. And all the other elements are kind of floating around the map. So it will help conserve the screen real estate. While the compartmentalized map layout is seen more on web maps. So you can see the central map isn't taking up as much space. And all the other elements kind of are in their own different panels, like the menu or the legend. And I also listed a bunch of interactions that we may be able to vary. So as I said earlier, pan and zoom and search are something that we might make adaptive between mobile and desktop maps. And map elements are the components that make up the whole map. So I'll just give you one example here. So in mobile maps, we often see that the title is included in the splash screen. So that will be a really good way to set the context and give the user an idea of what the map is about. And once you enter the actual map, you won't really see the title again. And that's another good way of conserving space. Projection is something that should be made more adaptive, but it's not really made adaptive a lot.
That's because most maps use Web Mercator, just because it's a rectangular projection and it fits the browser really well. But it's notorious for its distortion at the medium and higher latitudes. So if you want to make a world map or a continental map, then it's not a really good choice. But for our project, we're just making a pretty local map. So we ended up just using Mercator. And when you design icons, you want to make sure, for mobile maps, you want to make them more iconic or associative. So it's something very clear. If you make it really realistic, it might become more cluttered. So the simpler, the better. And for typography, we could adjust the label position, the font size and the font style. But specifically for mobile maps, we want to try to use a very clear font. That's usually sans serif and it's very fluid. And finally, for mobile maps, audio is a very good alternative. So it's a good alternative to text. So it will help the user focus on their surroundings as opposed to reading a text. So we'll see that later in our demonstration of our module. So I'll just jump to mockups quick and kind of just give you a really quick overview of how we did it. So we first built up our narrative of how the application would work. And we created initial sketches of the mobile and the desktop version. So still one-to-one information content, but different layouts and designs. And after that, we refined it based on a group process and we utilized the light table and dry erase markers. So after that, we created our hi-fi prototype in Illustrator and then our UI/UX team were able to create the application based on that. So now I'm going to hand it to Carl. Okay, I'm going to talk a little bit about technologies and then get into a demo. The technology stack we used, the software stack we used, was very much HTML5 and web dependent, browser dependent. We did not get into programming native mobile apps for this purpose because this was a graduate level cartography seminar. Our program has only been dealing with code, you know, with HTML and web standards really for a couple of years. And we're still very much on the learning curve in terms of catching up to a lot of you all. So we didn't use PhoneGap or anything like that to try and make a mobile native app. We just stuck to the browser for this, for simplicity's sake, if you can call it that. We did make use of the HTML5 standard in particular, some SVG elements as well as the application cache feature of HTML5. And I can tell you all about the benefits and limitations of that feature. Leaflet was our UI container for the map and we used jQuery in terms of DOM selection and stuff like that. D3 we mainly just used to efficiently load data into the DOM, as well as a little library called Queue that Mike Bostock also made, which I highly recommend; it parallelizes the asynchronous loading of data in a great way. And then on top of those we used a couple other UI libraries: Foundation, which is kind of a monster, for modal windows and slideshows. And then TwentyTwenty is a little library that provides a cute slider element that lets you do before-and-after photos, which we wanted to include. So I'll just go to the demo here. And so this is our splash page for the desktop version. We have a splash page for both. And you can see there was a problem loading the offline cache here, and that is because we're in hotel internet land and it has an issue with redirects to external resources. The cache does, so if anyone has a fix for that, I'd love to hear it.
And then this is a Leaflet bug that's making this map freeze, which I haven't figured out yet, but every so often it does that. But we have a working version up and running here from the geography server. This is what it looks like after you close the splash page on the desktop version. You have a pop-up window with some narrative text. And this is optional read aloud. Did I hit it? Is the volume on? I don't know if the volume is on. Go, go, go. It used to be on that. Hello. My name is Stephen Young. So there's the text. Once they get through, so that's meant to give students sort of an introduction to their assignment and to Madison as an economic context. We have some occasional alerts for heavy traffic on street crossings. And then the route to the first site. Once the first site is open, you have a slideshow, and each slideshow has three different photos. My name is... And live demos always screw up. So this is that TwentyTwenty slider here. And so there's three questions, three sets of questions, that are trying to engage students in thinking about the topics of the module and each site they're at. And each site has a theme; labor, transportation, housing, and power are all themes that are addressed by this module. And so the assignment that students have is to go through this guided tour and come back and write an essay on the places that they've been, and sort of the meaning that those places have in the broader global economic context, how Madison is connected to the world. So you have the Chinese Nike factory replacing the local shoe factory and a map of Chinese shoe production, where shoes are produced, et cetera. That's for labor. So stuff like that throughout the module. Once each location is accumulated, you can go back to different locations. So if I proceed on to the next site, then I could return to the shoe factory and go back there. There's also ways to access different parts of the module from within it. If I can minimize, oh, how do I exit full screen? Well, is there a way to minimize this or to get it? So I don't want to minimize. I just want to reduce the size of this. So you can see the layout is fluid and changes when you get to a different dimension, where we have the menu down here for the mobile version. And audio should automatically play with each new site. So you can see that the audio is actually being loaded with the same page load. The expansion of industry and transportation networks also depended on increasing the supply. Part of the delay with the audio is in order to get the cache to work on the mobile version of the site on Android and iOS. So that's causing a slight delay. So lots of learning experiences with building this site. I will go ahead and hand it back to you, Julia, to wrap up.
We will, let's see, we'll probably end up migrating that into a different GitHub account. So if you're interested in it, I would say the best way to get the link is to email one of us. I can give you my email. It's cmsack. That's cmsack at wisc, W-I-S-C, dot edu. And I'll be happy to send you a link. It looked like you were working with a light table on your mock-up portion. Doing similar work, what would you say is needed in terms of kind of the infrastructure? Did you work with the graphics department for font selection? Did you work with the light table versus a whiteboard? For the light table, it was just a really nice environment for maybe like ten of us to be around. We actually just started on one side of the table and kind of just wrapped around. So I thought that was a really good way to interact with our team. But if we didn't have that, a whiteboard would be a good alternative. So I guess it depends on what we have. Do you have one? Yeah, I mean, so that picture of the light table that you saw, or the pictures of the light table. This is inside the UW-Madison Cartography Lab, which is a full-service laboratory, a cartography lab that deals with clients and does contracts with clients and all that. And there's been, of course, a huge technological shift in mapmaking over the past 20 years. And this light table was sort of rescued from the previous technologies that were used to make maps. And it turns out that repurposing old technology sometimes has benefits. Like Julia said, it was just a really nice space to work around as a team. The other questions were, what sort of external resources did we utilize? And honestly, not a lot. As a cartography program, we seek to incorporate those knowledge bases into our program. So we have specific ideas about typography and general design. We are a design-based program, so we have a lot of design expertise in the program. We did make use of a book by Ian Muehlenhaus, Professor Ian Muehlenhaus. It's a mobile web cartography book, which just came out last year. And it's a great little introduction to, just more of a proposal for, what mobile web cartography could and should look like. Some proposed principles, a lot of which aren't yet grounded in empirical study and could be. Others? I was just wondering how many students ended up using the app. Was there a way for them to enter in their thoughts as they were going through the demo? The application will actually be used by students for the first time next week. So hopefully that Leaflet bug will be resolved by then. We actually discussed and debated whether to include a user input component to the site, but that would have involved setting up a server-side database and really more work on the server end than was within the scope of the seminar. So we were really just front-end UI designers. Also, the assignment for the class is more essay-based. The homework that Stephen likes to give is more essay-based, so it would have not really fit. Did you build it in a way that you can repurpose it for other professors, for other content? Yeah, it wouldn't be too hard. We're loading the data through Ajax. Everything is fairly dynamic and object-oriented. I don't think it would be too hard to repurpose. So I'm Kristen Mott, coming from Portland, Oregon. Julia, I have a question for you, just so you can switch up who's talking. Do you have a sense of how people might use this going forward?
Do you think that your fellow students will use this kind of project in other ways, or do you think this was more of a learning experience, a one-shot deal, for them? For us as a seminar, or for the... It's definitely a good learning experience, but personally, I would use something similar. I'm really into educating people. So this is a good way to kind of learn how to design for mobile maps, just because I feel like it's very necessary and it's very easy these days. So I would definitely use this more as a prototype and maybe even take it further, because this is our first application. So the more we design, probably the more we'll figure out. So it's definitely something I would do in the future on my own, or work with different people on different projects. Thank you. I realized that the question over here had actually asked for a number. This will be used by approximately 300 students this fall. Okay, we're probably at the end of our time slot, so I think that's it. Thank you.
|
Mobile device technology is being introduced into educational settings and is likely to become widespread as an instructional medium in the coming years. As of 2013, nearly three-fourths of American college students own a smartphone, while four in ten own a tablet, and a majority of students believe that mobile devices can make their education more effective. There is tremendous opportunity to harness these devices for situated learning, or lessons that take place in a real-world context, through the use of mobile-ready geoweb technologies. Adaptive web maps can be developed to guide students to important places—either virtually or physically—and facilitate landmark interpretation. This presentation will demonstrate a situated learning module developed using open source geoweb technologies for an International Studies course at the University of Wisconsin-Madison. The purpose of the module is to "make the familiar strange" to students in the Madison landscape, guiding them to historic landmarks and pairing those places with maps, images, and narration to explore the course of economic development in the U.S. The web application makes use of the principles of responsive web design to adapt to mobile or desktop devices, altering the map interface and modes of content delivery to fit the user's context. The mobile and desktop versions will each be evaluated to determine what adaptations effectively increased usability and whether situated viewing of the map on a mobile device influenced learning outcomes. A review of the application development and evaluation processes and results will be accompanied by a summary of lessons learned about how mobile mapping applications can adapt to their users and surroundings.
|
10.5446/31694 (DOI)
|
Welcome to my talk. My name's Paul Mach, and I'm going to talk about merging data sets, taking large sensor data sets and merging them in with vector data sets. So it's kind of a dry topic. It's not vector tiles or anything like that. But hopefully, I'll have some demos and some pictures and stuff towards the end to make it interesting and hopefully entertaining. But first, I'm going to start off with kind of like a big picture of the problem I'm trying to solve with the tools I'm building. And so the goal is basically to merge data into OpenStreetMap and make it easier. So this is kind of like a vague goal, whatever. But I just wanted to define some of the terms to really specify what I'm talking about. So when I talk about data, I'm talking about sensor data. So not like taking this shapefile and importing it into OpenStreetMap. It's more like I have these billions of data points and there's information there. And I want to transfer that into another data set, specifically OpenStreetMap in this case. And then the data, so that's like the data coming in. And the data I'm trying to fix is the geometry. This isn't for importing addresses or buildings or land use or something like that. It's to fix the roads and trails primarily. So that's kind of like the data I'm talking about merging. And then when I talk about easier, I'm not trying to build a command line tool. And I don't want it to be like tons of pointing and clicking right now, like just map tracing on top of a map. So somehow semi-automated help going from the information in one data set and merging it in with the OpenStreetMap stuff. And so why OpenStreetMap? Well, why not? I mean, we can talk about the philosophical stuff of it. But it's used at my work where I work at Strava. We use it for routing. So it's kind of a cross benefit of improving that data set. Kind of helps everybody. So what kind of data set am I talking about? Now there's a bunch of different examples. Specifically, since I work at Strava, we have this large global GPS data set with hundreds of billions of GPS points from millions of runs. And for those of you not familiar with Strava, it's a fitness tracking website, an online network for athletes. Basically, the flow is you turn on the app, you put it in your pocket, you go for a bike ride. When you're done, you upload it, and we shower you with beautiful experiences. And it's a cool app. But what's relevant here is we end up with these billions and billions of GPS data points, and you start to wonder, what can we do with this? What kind of information is hidden in these numbers, or these basically lat-long points? So the first thing that I did about six months ago was just take all these billions of data points and put them on a map. So here's an example. It's just a basic heat map of that. And here's another example of Europe. It's not just a population density map, and it does go down to zoom level 15. And you do see there is some information here of where people go and where they don't go. So just to finish up on the heat map stuff, it currently has 22 billion points from March. And so I hope to update it here in the next few months with more data. And there's a run and ride version. And I showed you screenshots here, but there's a slippy map version online where you can zoom, pan, all that good stuff. And it's technically not like an open data set, the Strava stuff, but it is available for browsing and for tracing in OSM. So we try to balance what the business people at Strava want
with what we can do to open up this data for mutual benefit on both sides. So how can we use this data to improve the map? Well, as you can see here, there is definitely information here. There's the trails. There's the roads. There's the ones that are more popular, the ones that are less popular, the parts of the map that people just don't go in, or that cyclists don't go in. And so one thing that we're doing, this is kind of an aside, is we're mapping all this data to road networks for cities. So cities have come to us and been like, hey, look, you guys have a lot of cycling data. We want to improve our infrastructure based on data. Can you guys help us out? So there's this team, there's two guys that are working with cities. Every city has different requirements, but the city provides their own road network from their GIS stuff. We map all the cycling data to that and tell you, like, time of day, number of users, number of rides, all that great stuff, and then it goes out as a GIS product. So that's called Strava Metro. And it's kind of the aside, like the pitch, the advertisement that may be relevant to some of this audience, but that's not what I'm here to talk about. What I'm here to talk about is this tool that I built called Slide. And you'll understand in a few moments why it's called Slide, and the idea is to take that information that's in the heat map and bring it into OpenStreetMap specifically, bring it out in kind of like a meaningful, fast way to just more automate the map tracing that's going on on the whole at OpenStreetMap. So let me show you an example here. So this is a page that I built to just kind of like show off Slide. And what you're looking at is the standard OpenStreetMap base layer in all its beige glory, covered up with the blue, purple, red heat map data for that area, for this one area that's in NorCal somewhere. And if you look closely, you'll see that there's no trail that corresponds to that heat map stuff. So what you can do, the way Slide works, is you can draw an outline of this trail and then click the Slide button. And it'll match the heat map data. So the idea is five clicks versus 100 clicks to get this line that matches up the trail. Because if you've ever been mountain biking or running or whatever, trails are nice and winding. And that takes a long time to sample properly. So as the input, you have this coarse black line. And then it iteratively improves that black line. It slides it into place to the heat map data. So that's kind of where the name comes from. I should just say off the bat, it's a server-side tool. It's not running in JavaScript, so it does do a round trip. But it is pretty fast. It takes about a quarter of a second to run this. The animation takes a little bit longer. But it is quote unquote real time. Oops, let me move this. Sometimes it does depend on the input line being close to it. And you can see how it's kind of fun to watch. So here's another animation, just the backup, the backup animation. So how does it work? At a high level, you can think of all the GPS data as a density distribution of where people are. So there's places people go all the time, like on the trail. And there's places people never go, which is 10 meters off the trail. So with that data, you can build this density distribution surface like this, where the high density corridors will be lower and the other places will be higher.
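Before going on to how the line moves, here is a toy sketch of that density surface idea: bin GPS points into a regular grid and invert the counts so that heavily travelled cells become "valleys." This is not Strava's implementation (which the talk notes is server-side Go); the cell size and log scaling below are arbitrary illustration choices.

```python
# Toy sketch of the density surface: histogram the GPS points into a grid and
# negate a log of the counts so dense corridors become low (deep) cells.
# Cell size, padding and the log scaling are arbitrary illustration values.
import numpy as np

def density_surface(points, cell=0.00005, pad=0.001):
    """points: array of (lon, lat). Returns (cost_grid, origin, cell_size)."""
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0) - pad
    idx = np.floor((pts - origin) / cell).astype(int)   # grid index for each point
    counts = np.zeros(tuple(idx.max(axis=0) + 1))
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)         # per-cell point counts
    cost = -np.log1p(counts)                              # dense cells become valleys
    return cost, origin, cell

def surface_value(cost, origin, cell, lonlat):
    """Sample the cost surface at a coordinate (nearest cell, no interpolation)."""
    i, j = np.floor((np.asarray(lonlat) - origin) / cell).astype(int)
    i = np.clip(i, 0, cost.shape[0] - 1)
    j = np.clip(j, 0, cost.shape[1] - 1)
    return cost[i, j]
```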
And then you can take your input polyline, the black line from the previous example, and consider it kind of like a string of beads and lay it on the surface and just let gravity do its thing and slide down into the valleys. So that's kind of like the model that I was thinking about in my head when I developed the tool. Like that's the physics, not necessarily the physics, but the concept that I want to model there. There's a lot of overlap between all sorts of stuff in science. Like this is kind of based off of mathematical optimization where you have a cost function and you want to iterate over your function and improve the cost, lower it in most cases. So in Slide, there's three cost functions right now, or three components to the cost function. And one is obviously the depth of the surface, like you want to go lower in that. Then you want to make sure that points are equidistant and that the angle doesn't get super sharp in the line. So that's just to maintain the rigidity of the line, to keep it from collapsing in on itself. So those three costs are computed every time. So this is a complicated slide, or maybe a too detailed slide, but I put it in there so it's in there on the online version. But the basic concept is you input the line, you input the heat map data, and then you go through this loop where you iteratively try to improve the cost. And once you get to where you can't improve it anymore, you simplify it down and you output the result. So it's kind of this iterative refinement process of matching what you put in, with this coarse sample, and making it right or better, or in some sense, transferring the information that's in the Strava data into your polyline. So here's kind of a gist. It's server-side, written in Go. It can leverage any raster data set. I think currently the one that I'm using is the Strava one, because I think it's the most interesting. But I have some other examples that you can use. It's an iterative refinement process, which is kind of cool, and it's reasonably fast. So it can work in a web setting. And so I first presented this at the State of the Map conference a few months ago and incorporated this code into the iD editor, which is the default OSM editor. So instead of just that demo that I showed you, you can actually add data to OpenStreetMap using it. And since then, I haven't done the best marketing on it, but 200 people have used it. There's been 6,000 changesets using this editor, which I think is pretty significant and cool. So in the iD editor, the flow's kind of the same. So you can connect up here. For those of you that have used it before, you can draw your coarse overview line, then tag it as a general path or whatever you like. And then you can click the little Slide icon, and it'll do the same thing. So there's two ways to interact with it in the iD editor. And one is to select a subset of points. So here I have three nodes on that way, and it's going to slide the portion in between those nodes. So in practice, you tend to have a really, really long bike path. It's a super long way, and it's best to just slide portions of it and walk along it. So that's one way to do it. Or the way that I showed you, I'll slide the whole thing, which works in this case, because it's relatively short. But yeah. So that's sliding to Strava data, which I found very useful. From a company standpoint, we're trying to do routing based off of OpenStreetMap data. Every mountain biker wants to route on their trails.
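Returning to the cost function and loop just described, here is a toy version of the iterative refinement, combining surface depth, even point spacing, and an angle penalty, and nudging each interior vertex toward whichever small move lowers the combined cost. This is a crude coordinate search under arbitrary weights, not the real Slide solver; the surface argument is any callable returning a depth value at a coordinate, for instance a wrapper around the grid from the previous sketch.

```python
# Toy version (not the real Slide code) of the iterative refinement loop:
# cost = surface depth + variance of segment lengths + sharp-angle penalty,
# minimized by nudging interior vertices with a simple coordinate search.
import numpy as np

def cost(line, surface, w_depth=1.0, w_space=0.5, w_angle=0.5):
    """line: (n, 2) vertex array; surface: callable (x, y) -> depth value."""
    depth = sum(surface(p[0], p[1]) for p in line)               # go low on the surface
    seg = np.diff(line, axis=0)
    lengths = np.maximum(np.linalg.norm(seg, axis=1), 1e-9)
    spacing = np.var(lengths)                                    # keep points equidistant
    unit = seg / lengths[:, None]
    angle = np.sum(1.0 - np.sum(unit[:-1] * unit[1:], axis=1))   # penalize sharp turns
    return w_depth * depth + w_space * spacing + w_angle * angle

def slide(line, surface, step=1e-5, iters=200):
    """Nudge interior vertices downhill on the combined cost until it stops improving."""
    line = np.array(line, dtype=float)
    for _ in range(iters):
        improved = False
        for i in range(1, len(line) - 1):                        # endpoints stay fixed
            for d in ((step, 0), (-step, 0), (0, step), (0, -step)):
                trial = line.copy()
                trial[i] += d
                if cost(trial, surface) < cost(line, surface):
                    line, improved = trial, True
        if not improved:
            break
    return line
```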
Now, those trails aren't in OpenStreetMap, so we can't really provide a solution to them. So we want to improve OpenStreetMap to improve our routing, kind of like a win-win. And as a bonus, we want to leverage our data to make that easier. And that's kind of the birth of what Slide is. I'm trying to kind of think of this concept at a higher level than just how do I get Strava data into OpenStreetMap, but how do we get other data in? So these are some of the data sets I've been playing with. And just to be able to merge this data in, in kind of a semi-automated way. That's part of what I'm trying to show: it's not just the algorithm. It's the incorporation with the editor that makes it so that you're still doing the same thing, just faster. You still have a person looking at it and verifying that something dumb didn't happen. But it's way faster than before. Because what I don't want Slide to be is this command line prompt where you press Go and it commits 1,000 things and you don't know whether it was right or wrong or what. So this approach I'm trying to take is like that middle ground of semi-automated. So the other place that I've incorporated this algorithm is to TIGER data. So TIGER is the US Census stuff that they put out. And what was merged into OpenStreetMap a few years ago, in 2007 or '08 or whatever, is the old stuff. And since then, counties have improved their TIGER data, but it's unclear how to merge that in with what's already in OpenStreetMap. So here's one example. This is like a screenshot from the iD editor where you have the white and the green are the OSM ways. And the yellow stuff underneath it is the new TIGER data. You can zoom in and look at it. Nothing's perfect, but it's a hell of a lot better than what's there. Can't we just have OpenStreetMap slide to that? That's basically the concept. The ultimate thing would just be like, yes, this looks right, do your thing, and fix it. So what I have now is kind of like the first version of that. And here's some other examples where the new TIGER stuff is totally great and awesome. But what's in OpenStreetMap isn't. Like, can we merge that in a semi-automated way? Because doing it on a country-wide automated thing is a bad idea. But in a semi-automated way where you at least have someone looking at every change, but quickly, is the approach. So here's another one. This is kind of my favorite. The topology is there. Like it's there, but it's just totally off. And the TIGER stuff is perfect. It's right. So can't we just have that match in? And then other places you have smaller stuff, smaller changes like this. So you can basically apply the same Slide algorithm. But instead of sliding to Strava heat density, you can just slide to the yellow lines. So I'm just going to do a quick demo on fixing this area. And as I was playing with this, I was like, well, as a first step, can't I just snap all the nodes to their nearest node, in this case? Which would also help to automate it. But the idea is the same. You kind of need to have a coarse, like, background, or coarse lines that match, but not completely. Then you can kind of select these ways and slide them. I should have moved this one over. Slide it over. So the idea is the TIGER stuff is smooth, properly sampled, looks pretty great. But I want it to be in OSM, and I don't want to point and click 1,000 times to get it in there and validate it. So here I'm just going to do the subset and test the internet gods on panning around.
So you can see kind of at that intersection up there, it's not totally perfect. And part of the process is that it doesn't change endpoints, so if your endpoint isn't right... But that's part of the, OK, now I can just go in and just make that small edit, and this area is fixed. It works well for winding roads, kind of those rural roads, like neighborhoods where there's a lot of just often not used roads that are winding, that take forever to sample, and no one has taken the time to correct them in OpenStreetMap yet. So here's another example, just merging it like that. So anyway, again, it's semi-automated. So the idea is to be like, hey, I'm a semi-expert at editing OpenStreetMap. I just want to be able to clean up this area super fast. And that's how it is at a low level, but at a higher level concept, I'm transferring the data of TIGER, the information there, and bringing it to OpenStreetMap in an easy way. So that's what I have working so far, and I have links, I think, on the next slide. But just some things I'm thinking about for improving this: how to get more input from the data set. So kind of the astute conference listener would realize that, hey, the Strava data is these polylines, these like 1D polylines, and the heat map data is just a density function. So you're losing direction and order of your data. You're losing that information. So can we pull more in? What if we knew the direction at every point on the heat map, or had a direction distribution, or something to incorporate direction into it? Because right now it's just taking density. And kind of the motivation for that is, like, sharp turns and switchbacks and stuff don't do so well with the Slide algorithm, because it tries to minimize sharp turns. So my theory, or something I want to work on next, is incorporating the direction information from that to help it. So you're bringing more info in. Better smoothing. I mean, sometimes the way the optimization happens is you might get a little jaggedy minimum. And so smoothing with that. And then more complex geometries. This is kind of my wish: for anybody that's edited OpenStreetMap, and you see something like this where the TIGER data is perfect, but the OpenStreetMap isn't. And to you, it's completely obvious what should happen to the OpenStreetMap data. It should just shift a little bit and clean itself up. I just want a button that does that. So that's kind of the goal and the vision of this thing, just to automate that. The info is there. Someone spent a lot of time cleaning up the TIGER data. Is it there for us? Can we just merge it right in? That's my presentation on Slide. Here's all the links that I have there. I don't know if you guys want to copy it down or post it on Twitter, post the presentation on Twitter. So that's kind of Slide and how I'm applying it to Strava data and TIGER stuff. But at a high level, I'd kind of want to explore techniques of merging these non-vector data sets into a vector data set. So as big data gets bigger and there's access to that in kind of these semi-open ways, how can we merge that into something like OpenStreetMap or even a city network? How can you contract with a cell phone provider that gives you bajillions of GPS points and make that useful? Yes, you can trace over it as an underlay, but that gets really boring really fast. So just kind of these tools to make that information transfer, that merging of data, automated. Thank you. So any questions? Thanks.
In your optimization algorithm, how are you deciding how many points to add to the polyline? I just resample it at every five meters. So it's just like a fixed five meters? Yeah, so I resample it because I want to make it... So the question is, how many points do I add to my polyline in the optimization? So the first step is I resample it at a five-meter interval so I can mimic the flexibility of a string. And then I take those, I minimize it, and then I simplify it again after the fact, so hopefully we can talk simplification algorithms, which I have this hacky way of doing, which works pretty well. But hopefully you get more points at the curvature part and less at the flat part. So that's like the final product. Have you tried using something like this with small user groups who may not use Strava for their activities to map out areas that they go? I'm thinking people like rock climbers, hunters, where you wouldn't be tracking your activity while you're doing it, but you'd benefit from something like this to actually map the trails or areas that you go. Yeah, I think the way to do that would be to use the OSM traces. They have their layer underneath, and to be able to slide to that. It gets a little bit questionable if you have multiple ones that are right next to each other. Like, what are you sliding to? But yeah, the more we can make OpenStreetMap better, the better. I meant actually having people go walk around with their devices and use Strava just walking around in the woods, and using the input from five or 10 people to slide to a trail that isn't on the map yet. Like a Strava mapping party or something like that? Yeah, basically. No, I haven't really. Honestly, I presented this a few months ago and this is kind of like the next version of that. I've just kind of been working on my own. And part of why I want to present is just kind of get it out there, get people's ideas and feedback on what can happen. And also part of the next step is to market it in some sense. You know, the OSM community, I'm sure there's probably people that would be willing to use it and that know how to use it right, but they just don't even know. So how about using this against an aerial photo? It seems awfully tempting to try to snap to high-contrast edges in a photo. Have you tried doing that? I've not tried that. But yeah, conceptually, anything that you can build this surface concept out of, like this kind of concept of things that have higher value and lower value, you can apply it to. I tried using like a map scan and the data was a little noisy and it didn't work so well. I haven't quite given up on it, but it was a little bit harder to do. But yeah, I mean, if you want to try it, that'd be awesome. I might have missed this, but were you saying that it's going to be in iD, or is that ever going to be back in, like, the OSM iD editor? Yeah, so the beauty of iD is that you can fork it and add whatever you want to it. So I've forked it twice. One is to have the Strava version where you slide to the Strava data and then one is for the TIGER stuff. And so maybe it's best to combine those two. So it's a fork of iD. So I try to keep it like up to date with the development that's going on there. But yeah, it's not like officially on the website. Cool. Thank you. Thank you.
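As a footnote to the resampling answer in the Q&A above, here is a small sketch of fixed-interval resampling of a polyline, the step that gives the line its string-like flexibility before optimizing. It assumes coordinates already projected to meters; the post-optimization simplification (for example Douglas-Peucker) mentioned in the answer is left out.

```python
# Sketch of fixed-interval resampling of a polyline (assumes coordinates are
# already in meters). Points are placed every `interval` meters along the line.
import numpy as np

def resample(line, interval=5.0):
    """Return points spaced every `interval` meters along the input polyline."""
    line = np.asarray(line, dtype=float)
    seg = np.diff(line, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])     # distance along the line
    out = []
    for t in np.arange(0.0, cum[-1], interval):
        i = np.searchsorted(cum, t, side="right") - 1      # segment containing t
        i = min(i, len(seg) - 1)
        frac = (t - cum[i]) / seg_len[i] if seg_len[i] else 0.0
        out.append(line[i] + frac * seg[i])
    out.append(line[-1])                                   # keep the original endpoint
    return np.array(out)
```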
|
Importing new/updated geometry into a large dataset like OpenStreetMap is tricky business. Features represented in both need to be detected and merged. Oftentimes editors are asked to completely "retrace" over updated maps as automated methods are unreliable. While a 100% accurate merge is impossible, it is possible to auto-create a best guess and let the user refine from there, eliminating as many manual, tedious steps as possible. Slide is a tool designed to solve this problem and works by iteratively refining roads, trails and other complex geometries to match another dataset where the features are correctly mapped. In a single click one geometry is "slided" to the other, eliminating hundreds of tedious clicks. The form of the new dataset is flexible. It could be an updated representation of roads such as the new TIGER database, a scanned historical paper map, or a large collection of GPS data points like the 250+ billion made available by Strava, a fitness tracking website. Overall, Slide is designed to leverage what we already know, collected in various datasets, to speed map tracing. Map editors should be focusing on higher level challenges and not just retracing over another dataset.
|
10.5446/31695 (DOI)
|
My name is Richard Hinton and I'm here with New York Highland. We both come from the George Washington University in Washington DC and we've been for the last several years Incorporating a module or an assignment of over-street mapping with our GIS classes Some of you have been to State of the Map and we have seen our presentation on How we do that we're gonna talk about that a little bit But then since the State of the Map in April of May this year the last six months a lot has progressed in Sort of developing and how this can potentially be used in a much to a much broader audience You don't have to be geographers. Hopefully to even you know incorporate this if you want you can be in various sort of disciplines in public sectors To actually bring OSM into your class if you have an interest in doing that So I'm gonna start off start us off talking about What we've done a little bit and and then we'll hand over to New Level talk about more about where things are going and it's a pretty cool interesting stuff So I said we've been incorporating OSM as a classroom for the last few years It started with one class and one instructor and it's now grown to at least four to five classes each semester So in 2014 to the spring and fall semesters and our summer sessions By the time December comes this year will probably have about 250 students all together that have been contributing to OSM in this year When we do it obviously most people when they contribute OSM it's a voluntary basis and people simply do what they Pick a job they want to work on they contribute on their own time Obviously, we're bringing to the classroom. We have a more of a captive audience, right? They don't they're not they're selected volunteers. I guess they're being voluntold. This is what you're doing well the way we approach it is that we like to work with a partner and listed here are some of the partners or the partners we've worked with over the last few years and The collaborations have always worked for it worked very well until in America my cross USA the Geo Center there We've gotten support from HAU for the State Department by giving us imagery that we can use and trace from And then some of the projects included fine folks at the World Bank as well So it's worked fairly well to have this sort of collaboration with their students So what do we use OSM the classroom Obviously maps for most people here would think maps are very know that maps are very applicable regardless of disciplines more and more people in different disciplines are Realizing this we see that in our own classrooms Not only do we have geography students environmental studies students in our classrooms we appeal from the business school international affairs public health public policy These students in different disciplines are realizing that oh this digital mapping thing is kind of important and it can be really applicable to our work So OSM is a great way to start introducing people and getting different people involved with geography and mapping and So making them recognize that they can you know use this kind of tool to help them sort of really tell their story and understand So they're they're interested a little better With ours Sorry, I'm trying to see what's here Thank you There we are The way we do it say with With having a partner to really Help some engage in a sort of service learning aspect as well So not only are they just you know contributing to this larger project, but they're actually working with a partner and having a partner 
Really add some gravitas for the students so they realize and I'm not just doing this for myself or for a grade It's actually you know helping somebody in another country you know Just design their disaster response or their evacuation plans these kinds of things And it's also a nice sort of line item they put on the resume that they work on a project with the World Bank with USA with the Geo Center whatnot and Having this kind of skill especially in the open source role is certainly something that's remarkable to a lot of a lot of a Lot of employers now looking for some sort of open source capability in their skill set when they're hiring people on So we're sort of introducing them to this in this manner So when we want to bring something like this that's typically sort of developed pretty much a volunteer basis into the classrooms a lot of considerations a lot of things We have to think about So these are the things that we sort of typically need to go through or have been going through over the last couple years and To sort of make it a workable assignment and make sure it runs smoothly. Obviously we need to Make sure everybody has the same amount of work. Alright, so you're getting a grade for this Jimmy doesn't want to have to do twice to work that Sammy does right has to be fair But dealing with the real world and we're safe for tracing features Some locations are more densely populated Than others so you need to sort of identify Sometimes if some's are some areas may be more difficult to do so these are things we need to consider And how do we how do we divide up there? Do we divide it? Divide up into a series of grids and everybody gets a grid to do or do we say alright everybody has to do 200 features? And you have to do you know roads and buildings and whatnot But then because it's a this open collaborative platform We have to find a mechanism to prevent people from overlapping We don't do occasion of effort because obviously if I digitize a building and then Nula's next door and doesn't realize that I did Digitized it. She does the same one. It's going to give an error So these kind of things we have to sort of figure out to make things work smoothly We also need to track who did what obviously We need to be able to find a way very easily and quickly to identify all the information that Jimmy has created and Sally has created so we can mark it and Going through this way it also at the level of validation and quality control to the To the contributions that our students make to OSM So very quickly Our workflow that we've been working with it's changed a lot partly because we've gotten better and more used to To doing this as well the technology has changed as technology has changed. It's actually made things a lot easier for us So basically the first thing we need to do is find an area map if we're working with a partner like we have been in Basically the areas chosen for us essentially, but even still when you choose an area you need to obviously investigate it How much information has already been collected in in that area in open-street map? How many how much already exists? The imagery you're going to be using is it's partially cloudy. Is it hazy? 
You have good quality imagery these kinds of things We also need to set up an instance in tasking manager this came about I'm just gonna talk a lot more about that a few years ago to help people Collaborate more effectively essentially OSM is great for getting a bunch of people to work On data from anywhere in the world literally going to people in in country Really enabled us to open it up to Much broader audience initially when we started with one class who's an intermediate GIS class They already understood concepts of topology and dead and chasing and whatnot ID editor in the in browser editor with OSM Very easy and so our new students introduction students can now be included with this And now we have our intro classes and our intermediate advanced classes involved in this project And for grading we use Open source site open overpass turbo because there we can find specific features that any student has Has done so we can query using that website OSM to find exactly what has been done And Another big thing that we like to do is actually find a way to really engage the students Having our partner come in and talk to students usually right before the project starts really adds Really has some heft to it because they realize okay. This is not a paper that I did I'm in history class. I get it back. I throw it in the bottom drawer of my desk It's stuff. I go it's gonna go live to the world It actually matters because these people are working with people in Indonesia in Colombia wherever and they're actually using this information to you know For disaster preparedness, so it actually really matters So to get them up to speed we introduced them obviously to OSM in general we introduced them to The actual workflow what they have to do how to use the editor we tell them how we're assigning the information Obviously how much work they actually have to do and Then we actually get them to use overpass turbo to track the road work as well And then the last thing we like to do is actually have a mapping party because when it comes down to it Tracing and digitizing isn't real sexy work and when you have you know the prospect of doing 300 buildings or a piece of infrastructure It's like okay. This is going to get you know sort of tiresome fast So we have a host a mapping party we make it optional But we order a pizza bring in you know Coke and Sprite and ice cream and play music and just in a sort of even more collaborative Environment it also helps for students to work together So sometimes you have to zoom in and zoom out of imagery if you've ever done that to see this All right, where does the building actually end or where does the road really go? So sometimes a couple of pairs of eyes help you sort of figure out exactly what you need to do And help you sort of make a better product And with that I'm gonna hand over to Indola So As Richard said this started as a pretty small endeavor with me kind of fiddling around with different softwares to see how you could Comprehensively assign something like this to people without having overlap and people getting frustrated or parts of a city being left blank And it definitely has improved as the technology has gotten better. This was our first mapping party. It was a pretty Small affair. 
I think we maybe had anywhere from like a dozen to 15 students and this is where we're at now We typically do this on a Friday mid semester and over a course of four hours We will have over a hundred people come through and a lot of the other faculty in our department that are not GIS people They're not you know techie at all They're human geographers and they've gotten so into this they'll come and they'll take a training beforehand There's like one of our faculty dragged her husband in and we'll have this huge group just working together music pumping And a lot of the students actually knocked their assignment out During that mapping party because they're like I'd rather just come in work with my friends have a laugh You know eat a slice of pizza with my professor and we have bodies everywhere our conference room our lab a lot of professors open Their offices and let like people work at their desk. So it's a really Really good sense of community and our partners typically will actually come come visit When we do these and you'll see there's like a bunch of adults Running around so like the guys from USAID the guys from map box right across a lot of our friends that we work with in the area They'll just come in to be that you know Over-the-shoulder extra guidance for the students so that that's a real kind of a winner in terms of engaging the students So what the new stuff the stuff we wanted to talk about today that you know Is it just our project and how we do it is this notion of a teach OSM? so In the last few months a bunch of like-minded people got together and they said okay There's all these instructors out there that are starting to do Little OSM modules or assignments in their class different layers levels of complexity You know you have like basic geography classes like we're using them in but then you live people in other disciplines Some people are rolling it into cartography and then maybe taking the data and going further But we've all this wonderful information and in the spirit of open source should we not share the materials that we use to teach with other Instructors we're increasingly seeing again at our mapping parties some of our faculty will bring their friends who are faculty in other departments Into these parties and we've had several requests You know from let's say there was this one lady from public health and she's like I really want to do this in my you know Epidemiology class will you guys help me put it together? I'm really sure you can like have all our stuff and then we realized okay, so there's a lot of things that we do that We take for granted we've done this enough that we can just hit the ground running and set a project up for a hundred kids And inside of a week, but if you're not a geographer or a GIS person, you know, how are you going to get your head around this? So the idea for teach OSM was born to be a resource for instructors So that they could add collaborative mapping projects to their to their classroom So Richard and I are taking the first bash at putting the content together. We have a lot of it developed already So it's taking a lot of the materials that we developed to use in our class the stuff that we give to our students And then also some extra instructor resources and we hope to go live for Geography Awareness Week Which is around November 16th? We kind of hope that we'd be there by today, but we're not quite there, but we've we've some of it ready to go So what is going to be what's going to sit on teach OSM? 
So pretty much all the materials we hope that would be required to help an instructor Pick out the area identify and investigate the area that they're going to work in how they're going to divvy it up and assign it to their Students how they manage the project as it progresses and then also how you grade it, you know If you're from a different discipline public policy, have you ever graded topology? Probably not. Do you even know what it means? Probably not so Explaining those things in a very practical way because again You're working with students and we owe it to them for this to be equitable And we have to have a very kind of open way in which we grade it okay, so that they you know know why they get the grade that they do so and step-by-step design aids for the instructors so how they can customize their work the workflow that we developed How they can make that work for them, so we're kind of taking our workflow and maybe stripping it down We definitely have an interest in disaster Disaster response because that's my background emergency management Richard has also done a lot of work in the area So we're kind of biased towards those stories But that's not going to be for everybody you may have a high school teacher that wants her students to map their local neighborhood So it's going to be a lot more interactive it may involve walking papers and things like that An element that we have obviously not included in our so Different options for how you can build that assignment for your class again the grading Techniques that we use the rubrics and what we've discovered in our different kind of experiments over time We have found that actually recording demos interactive demos for our students to use and putting them on our teaching websites They're really handy because hey you might do a demo class and you know By next week Billy has kind of forgotten how to do it and it's much easier for him to maybe sit back and go through that little interactive video That we made then write a big long email and we're like well Would that be a good way to teach an instructor too so we can write out all these instructions? We can also show them how we go on to tasking manager set up a job how we Investigate an area to determine high low medium density show them how to trace and what we mean by good topology Making that visual it's an awful lot easier So that's a lot of the stuff that we're currently making right now And then a lot of the training materials that we use like we do a training class before our mapathons where we teach the students They're part of the workflow how it will all fit together why it's important so all that material again, we've taken that and kind of scrubbed out our preferences and We will put that on teach so a professor can take it and make it their own and reuse it So they don't have to reinvent the wheel for the assignments and then more instructional videos of us teaching students that they can use If they want to and then the one thing that we really want to have on there are case studies We have met a lot of people like Stephanie from San Francisco University, we know Robert out in Boulder, Colorado We've met some people from Tulane that have been doing this is a lot of people doing this and they're doing it differently And we want that reflected on teach OSM. 
So the core of teach will be Super basic stripped down for the first time then you can look at the case studies All these different people how they've applied it in different ways the local the remote working with a partner not working with a Repartner the idea of like when Stephanie did her she included blogs so students were were like constantly Documenting their experiences. They were working on this. This isn't something we've done But you know, it's a wonderful option and it would be great to have that there So how many people know tasking manager? Show of hands. Okay, nice few. So I take the rest of you kind of maybe have either totally not heard of it or heard of it in passing so Tasking manager OSM tasking manager and it was developed by the humanitarian Open-Street map team or hot and the tool was developed to coordinate collaborative mapping efforts specifically for disaster affected areas okay, and anybody can participate anybody can jump in and Be Participate in a task, but only administrators can actually set that job up Okay, so by setting a job up you you go in you create a little task to say hey We're mapping, you know for Ebola and in this country and then OSM users can go in and they can basically check out a slice of real estate and then go trace that in the open-street map environment This was a game changer for us when we scaled up this exercise because when I did this previously I was jumping over and back between QGIS and making fishnets and putting stuff up on websites And it was messy on my side to manage. This was an amazing Leap for us to be able to work with this and we're lucky because our partners are part of the community that use this so the Red Cross USAID So we were allowed create tasks and we were allowed work with them But you know this was created for a specific purpose and it's not appropriate to open this to all instructors You know high school teacher as I said wants to map a neighborhood with her her students That doesn't belong in humanitarian tasking managers. So our idea we approached OSM We said hey couldn't we get an educational one of these up and running okay, and we'll be the administrator We'll grant any professor that our teacher that wants to use it will be the ones that will be the gatekeepers And we can like time out tasks that people don't complete But it's just a wonderful way to teach and this makes it doable for non geography people So they were like okay, we can consider that so we got a little bit of seed money from our university in innovative teaching grant and And Pierre Gerard from camp to camp who works a lot on tasking manager He a lot of the things that we we wanted to see some Actually going a little further here. This is what one of the job looks like so previously You just saw little snapshots of each of the jobs and when you click on it This is what it looks like and students can select a little cell As you can see here and they check that out and then that automatically opens in an open street map editor window with ID And what we were like for teaching it would be awesome if we could like sit down and we can zoom in and out and we can Determine if a given cell is easy or difficult or hard We can assign you know three to Sammy give them three easy ones Give him two hard ones and then do the same for each other student. 
That was something that wasn't in there previously So we worked with those guys this year with our seed money and they added this capability So that you can the the instructor can go in and in very simple terms Tag cells as being like hard to do easy to do whatever And then also the ability to assign a particular cell to someone the way the original tasking was set up is people went in and they Self-chose when you're dealing with students, of course the smart ones that start early They'll go in and they'll pick the easy ones and the procrastinators the ones that you probably don't want doing the difficult stuff They get stuck with the difficult stuff and it affects quality. So again in the spirit of equity We wanted to be able to have this option to a to assign it So this was something that they added to this too The nice thing was a lot of the additions we wanted to see in tasking manager a lot of our friends the red cross USA ID World Bank those guys they were like I think those are good things to have for the regular tasking manager So they added a little bit of money and we got a lot of New updates on tasking. This is actually version two of tasking manager there was a big revamp this summer and In the next week or two a fork of this with all the new toys will be created and it will be teach tasking manager So it will look exactly the same, but the difference will be it's Where educators can go to make use of this and we're not kind of polluting the pool here with of the disaster community So it will look exactly the same And have the same functions with just a different crew and so That's well, I kind of got ahead of myself. That's what I talked about there But I really want to thank the folks that we've worked with that have made this possible Miguel Maron at open street map Priya Gerard from camp to camp he put a lot of work into this and Steven Johnson is working with us to get teach OSM going. He's put it up on GitHub We're very new to all this stuff. So I'm not probably using any of the jargon correctly But we want the development of the site to be open to we do feel though Putting having the first bash and putting it up there. It's much easier to get collaboration when there's something to critique So rather than just constantly having a conversation about what teach should look like we're going okay Here's a first pass have added people so we want people to use it to critique it to contribute to it Because teaching this stuff it looks different It will look different for everybody and we want that reflected in every which way and we don't want it to like belong to any one group or belong to any particular tier of education and I think that's pretty much it for us. We have a tendency to go over so we're trying to be very conservative today And I feel like this is a topic that feedback is the most important part. So anybody has any questions? So Oh, that's how okay, I was gonna say how do we join your community of practice for this because you can obviously join on GitHub there is a teach OSM group. Okay that obviously and or you can email us directly Okay, and then you say in November you have the toolbox ready. It's gonna be open November. Okay, great. Thanks teach OSM.org It's where you go. Okay, where it will be. It's just a placeholder right now Yeah, now after you've had this going for a couple of years in your university Have you seen any increased demand and other disciplines outside of GIS and geography for use of more sophisticated? 
sophisticated spatial analysis tools and You know for research and and or teaching in and our depart like in our department has grown considerably Over the time that we've been doing this when I say several years It's probably been like four because this stuff obviously wasn't around too much longer than that But definitely other faculty have become interested And what I like and what we this is something we spoke about yesterday in an educational panel We teach introductory all the introductory classes were like the gatekeepers for the any the geospatial learning and we teach our classes Our GIS classes to be software independent So we expose students to the proprietary as well as the open source stuff And we really feel like OSM is almost like a gateway drug to like suck people into the world of geography the new Geography and we've had a lot of interest from other departments. We've had a lot more departments collaborate with us on kind of traditional spatial analysis To but definitely and but I think making it accessible to those other departments not making it be You know, oh, you have to go learn, you know arc GIS and spatial statistics that there is a way There's that a way to do entry-level mapping in a meaningful way as part of your class I'm just wondering if you've noticed any change to the data around your campus or local community Or if you've stocked any of these student accounts to see if they've got addicted to this kind of mapping and taken to their local community or Done their own hot OSM stuff We haven't looked at any stats Because a lot of people pull stats on who's done what when Like when we've done our mapping parties like the guys at geocenter will actually pull a bunch of stuff and say hey We did like 15,000 edits within two weeks and just kind of stuff But just last this last semester The students some students got together and had hey when we have our own mapping society and specifically for humanitarian efforts So it's really sort of gonna get up and running this this semester And when we're hopefully working with like the red cross and actually start training them So they can actually start to work Maybe it's a bit of a backstop for when things go really crazy there But there is that I mean this is came from the students There is an interest in the student body do say hey We can you know do this on our own as well and this is gonna be like a student run body We're the advising faculty for it, but it is student run organization. They all the seats and then the committee They're all student based and it's you know and the university gets a little bit of money for them as well to You know help them help out with some other stuff But it's like it's getting up and running out But there is an interest and it's because it's from students will be a taught and then they you know Sometimes bring friends and say hey look what I'm doing. It's kind of cool. So hopefully it'll grow And part of the idea of the society called it's called HMS GW is that it's the core students will have a good solid GIS background They've been through many of these OSM things with us They will bring in other students and they will train other students. So it's not always people that come through our classes They're trying to student teaching student that kind of a thing and DC is quite well mapped because DC GIS contributed a lot of data and I think we kind of we tend to our Students kind of get it get the bug of the international disaster response thing. 
So a lot of our students Kind of tend to work at that type of stuff But you know if there's a big event red cross and call us and say hey Can you do a mapping party and we'll like do impromptu events and we'll have people who've done this like three years ago show up Bring their roommate and you know, so there's definitely that sense of community in our department about this So this is the coolest thing that I've seen at this conference. Thank you and from my experience It's pretty easy to organize mapping parties with students. They just itself the idea sells itself Especially if you feed them for a classroom environment I'm curious if you find there's a student who's done some quality work. Maybe he doesn't really care very much How do you approach that you roll back their edits or do you go in and redo it? Like what's your mitigation? strategy for that You know like for students that don't Don't do good work. You mean? Yeah, as we as we grade it we go in and we actually look for okay, we're Jimmy's All of us edit so we actually see them come up. We actually physically go in and visually check them when we see things that are very Wrong I have one student build a building right over a road. I'm like really there's not a road going to disguise house I'm pretty sure So I you know I basically go in and change those yeah, we edit them to fix it as we go along right and we threaten them Yeah, you're like, you know puppies will die here You're don't screw this up and it's actually funny. You'll have some of them will do a crappy job of some of their other assignments But they really have taken to this and and I think as we've made our kind of teaching materials better The quality of their work has improved significantly for the first time this year We had this assignment as part of an online class and I was terrified if I'm not standing over them yelling at them How the hell are they gonna do this right and they did they did a fantastic job the amount of stuff We have to fix is minimal which really has surprised me I think one of the biggest things is sometimes they forget to tag But that's like an easy thing for us to spot and fix and over past herbal allows you to query by students So you can just zone in on each one by one and you can take their work to task and that's really helped I mean there's obviously a variety of quality and some some people are very particular and digitized every little look and cranny And others are a little more general but as long as the basic infrastructure is there if they forget the tag easy fix I mean not having the basic infrastructure is why we're doing this I and so people in country can now have that and work with it. I Was curious how you guys chose your partners I think it started with first ones America my cross because the guy there actually was a student of mine. That's how it started actually with Robert That's it's funny. I really had to think about that if these guys have been our friends for a couple of years now So we forget how we got introduced to who but it's kind of a bit of a buddy network It doesn't hurt that we're all within like five blocks of each other in DC We've now well one with these guys at Red Cross kind of become drinking buddies, too So that's kind of that's helped not during the mapping party. 
That's that's after But yeah, that's actually how it started a student of mine had got a job at the American Red Cross And and when he heard I was doing this project he thought it was kind of fun And he's like oh could we make this work together and it's kind of gone from a very loose collaboration to like a You know big honking projects that we're thinking about six months out Okay, all right, I think that's it right of time. Thanks guys
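The grading step mentioned earlier, using Overpass Turbo to pull up exactly what each student has contributed, can also be scripted against the public Overpass API. This is a hedged sketch: the username, bounding box and endpoint are placeholders, and the user filter only matches objects whose current version was last edited by that account.

# Query everything a given OSM username has edited inside a bounding box,
# then count features by type as a quick check of a student's output.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"
BBOX = "27.65,85.25,27.75,85.40"   # south,west,north,east (hypothetical study area)
USER = "gwu_student_42"            # hypothetical student account

query = f"""
[out:json][timeout:60];
(
  node(user:"{USER}")({BBOX});
  way(user:"{USER}")({BBOX});
  relation(user:"{USER}")({BBOX});
);
out meta;
"""

elements = requests.post(OVERPASS_URL, data={"data": query}, timeout=90).json()["elements"]
counts = {}
for el in elements:
    counts[el["type"]] = counts.get(el["type"], 0) + 1
print(counts)   # e.g. {'way': 180, 'node': 950}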
|
For the past three years Nuala Cowan & Richard Hinton of the Geography department at the George Washington University have integrated the open source mapping platform OpenStreetMap into the curriculum for their introductory undergraduate Geographical Information Systems (GIS) & Cartography classes, traditionally the domain of desktop, proprietary software. Professors Cowan and Hinton have sought to expand the traditional curriculum and expose students to various open source software packages, web-based platforms, and data collection initiatives, specifically in a service-learning environment. In collaboration with both local & international partners (American Red Cross 2012, USAID 2014), GW Geography students have used high-resolution satellite imagery to trace road and building infrastructure (Colombia & Indonesia 2012, Kathmandu 2013, Philippines & Zimbabwe 2014), data that is subsequently used to support disaster preparedness efforts. Initiated by a small innovative teaching grant, we have started work with the OpenStreetMap Foundation to develop a web site that would allow other instructors to replicate our mapping assignment specific to their particular discipline and curricular needs. This site is called TeachOSM.org. Our funding has since been matched by the World Bank, USAID (OTI and the GeoCenter), the State Department and the American Red Cross. With this funding the scope of the project has been expanded to include the redevelopment of the OSM Tasking Manager. The OSM Tasking Manager is a custom mapping tool that facilitates collaborative mapping projects with a humanitarian focus. The purpose of the tool is to divide a mapping job into individual smaller tasks for group work, while guaranteeing coverage and minimizing overlap. New additions to the Tasking Manager will allow instructors to assign cells to individual students for both data creation and data validation roles. Mapping has applicability across many fields and communities of interest, and can be used to document, archive, plan and contribute to both local and international initiatives. Open source mapping modules and assignments are also a unique way to integrate service-learning strategies into course curriculum, while exposing students to new and exciting technological platforms. The experience teaches civic responsibility and the value of collaborative efforts in the global community. The collaborative mapping initiatives at GWU Geography have been exclusively disaster-related to date, as this coincides with the research interests of the faculty involved. We believe this instructional module/assignment is applicable to many disciplines and teaching scenarios, and the objective of the TeachOSM platform is to open that possibility to these other fields in a comprehensive, user-friendly way.
|
10.5446/31697 (DOI)
|
Hello, everyone. My talk is about QGIS Server. I have two talks: the first is about QGIS Server and the next one is about QGIS desktop, so it's a little bit misleading in the printed program, I'm sorry. First of all, my company: my name is Pirmin Kalberer, I work for Sourcepole. We are QGIS developers located in Switzerland, we provide support and services for QGIS, and we also have QGIS Cloud, which is a publication platform for QGIS projects. What is QGIS Server? Who here has already worked with QGIS Server? Okay, not so many, so many of you do not know QGIS Server and I have to explain it. You can think of QGIS Server like the UMN MapServer: instead of taking a map file, it takes a QGIS desktop project. So in that graphic, you work with QGIS desktop and save your project as a project file, and exactly this project file is the input of QGIS Server, which delivers maps as a WMS that looks exactly the same as on the desktop, because it's using the same QGIS core library and QGIS styling and symbolization. So it looks exactly the same, with some additional functionality, which I'll talk about. QGIS Server has an interesting history. It started as a separate project; it didn't start within the QGIS project. It started as a research project at ETH Zurich in 2006. A few years ago we were contacted by the city of Uster to support QGIS Server and to change the concept to this project-file approach, because before it was configured by SLD, so it was really a separate styling language. There is still quite some SLD functionality, a little bit hidden, but it is still there; now it's using the same project file. That was done in 2011, I think, and then it became an official part of the QGIS project. We also got community contributions, like the WFS and WCS support by René-Luc D'Hont. And what does it offer? It offers OGC services. The most important one is the WMS, which delivers maps. It has the standard service functions: GetCapabilities, GetMap, GetFeatureInfo, GetLegendGraphic. Then it has two additional requests: GetPrint, which delivers PDF printouts based on the print templates made with QGIS desktop, and an extended capabilities request called GetProjectSettings, which has more information about the project in it. It also has a built-in Web Feature Service, so you can activate WFS functionality in QGIS desktop in the OWS server tab. You can even do transactional WFS, which means you can write through QGIS Server to the database with WFS-T. Fairly new is the Web Coverage Service, which delivers raster data out of QGIS Server. For printing we have additional parameters, the most important one being DPI, and as I said, all the layout information comes out of the print templates made with QGIS desktop. This is a screenshot: you see QGIS Web Client in the background, a selection of print templates on the left side, and down there is the PDF, which is the same as you get in QGIS desktop. You don't need any additional printing software on the server side. There are more extensions to the OGC standards: we have server-side feature selection, we have filters and server-side search, we can set transparency on the layer level, we can evaluate expressions in feature info, and more. What's coming next? Already implemented is support for QR codes. We use that because we have customers who want to print the map and have a URL to the online map with exactly the same extent and the same layers and so on.
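A hedged sketch of what such a request can look like from a script: the endpoint, project path, template name, CRS, layers and extent below are all placeholders, and the parameter names (GetPrint with the DPI vendor parameter and map0:-prefixed composer-map settings, served by the qgis_mapserv.fcgi binary) follow the QGIS Server documentation of this era rather than any particular deployment.

# Compose a QGIS Server GetPrint request and save the resulting PDF.
import requests

QGIS_SERVER = "http://example.org/cgi-bin/qgis_mapserv.fcgi"  # placeholder FCGI endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetPrint",
    "MAP": "/data/projects/city.qgs",      # the QGIS desktop project file
    "TEMPLATE": "A4 Landscape",            # a print composer template from that project
    "FORMAT": "pdf",
    "DPI": 300,                            # QGIS Server printing extension parameter
    "CRS": "EPSG:2056",
    "map0:EXTENT": "2683000,1247000,2684500,1248000",  # extent of the composer map item
    "map0:LAYERS": "buildings,roads",
}

response = requests.get(QGIS_SERVER, params=params, timeout=60)
response.raise_for_status()
with open("printout.pdf", "wb") as f:
    f.write(response.content)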
That online-map URL gets encoded into the QR code and included in the print template; that will be in the next QGIS version. We also have extensions so that you can add user text in the web interface, pass it to the printer and include it in the PDF, and, also important, you can highlight features via parameters in the printout. So that's the server part; the next thing is the client part. You're free to use any client that understands WMS, so you can build your own client, like an OpenLayers client. But QGIS Web Client is a kind of reference implementation, because it supports all these extensions which QGIS Server has, so it's a good starting point. It is based on OpenLayers 2 and GeoExt for the UI. As I said, it supports the WMS extensions of QGIS Server; it was started by Andreas Neumann but now has quite some input from the community. That's another screenshot of how it looks: a classical web GIS client with the layer tree and all the tools you know on the left side, and a nice printout. It has search functionality with different possibilities. You can search on GeoNames online with the built-in API, or you can use external database tables for searching; then you need additional server software for the search. Without additional software, you can search directly against QGIS Server with the extensions I mentioned, which are in GetFeatureInfo and GetMap; the GetMap extensions are for selecting the search result. What happened recently to QGIS Web Client: it was translated into 14 languages, it has vector export using OGR, it now supports other OpenLayers layer types like Google, Bing and OpenStreetMap, it displays pictures and web links, it has a topic catalog, and you can embed multiple maps. What's coming is the raster export, which is already implemented and maybe in the meantime included in the version on GitHub. The next client which is usable with QGIS Server is the OL3 Mobile Viewer, an OpenLayers-3-based client specialized for mobile devices. It uses jQuery Mobile and has a JSON configuration file. It can not only be used with QGIS Server but also with MapFish Appserver, which has the UMN MapServer as a backend. It has been in production for almost a year and is also available on GitHub. Some features: it has topic and layer selection in a mobile style, it has search functionality and feature info (you can tap on the map), it has automatic position updates, and you can turn the map according to the compass orientation of the device. I have a short demo of it. So here you see my finger taps displayed as points. I zoom out, I zoom in. On the left side is the layer menu, or the map selection. Now I have changed the base map. Now I have tapped on a feature; you see the feature info, which is HTML. Next thing is doing a search: I search for an address here and the map zooms to that point. I switch again, and it is an aerial image. Now I am turning the map; that is an OpenLayers 3 feature. You see the compass turning. Now I press on the compass at the lower right and it goes back to north. Now I have pressed the locate-me feature and enabled automatic updating of the map; it is turning according to the device direction. That is about it. We did not see the layer menu, but we saw most of the functionality here. Next step: first we had QGIS Server, which really delivers maps; then we had clients; the next thing is how do I publish my QGIS project. I can do that manually.
I can copy all the files to the server where QGIS Server is running. There are also QGIS publishing platforms; here is a selection. We have QGIS Cloud, which I will talk about, and there is also Lizmap, which does similar things. To explain what they add to QGIS Server: QGIS Server down there really just delivers WMS. What we need on the left side is an API for uploading files together with graphics, like special SVGs or bitmaps that are used in the print templates. Then we have the viewers, like the mobile viewer. We also deliver the OGC services for direct use and put access control on top of that. That is, more or less, what these publication platforms do. To show you that in detail, take QGIS Cloud as an example. You can try it for free and publish your maps there. It is a special infrastructure: it has a geodatabase, it delivers the services directly (WMS, WFS, WFS-T), it has this publication API and it has map viewers. That is really the easiest way of publishing a map with QGIS. It has options: you can have maps with limited access, you can have customized viewers with your logo and so on, and you can have searches based on database tables. New, and available in the next weeks for public download, is the same infrastructure as a private cloud virtual machine which you can install on your own server, so you have this publication infrastructure in-house. If you are interested in that, follow our Twitter accounts, which is the easiest way I think, or look at qgiscloud.com. What does it support? It has QGIS Web Client as the standard viewer. It supports all OpenLayers-based background layers, which means you use the OpenLayers plugin within QGIS, you add an OSM or a Google Maps layer there, and you have it automatically within QGIS Cloud. Same for printing: if you create print templates in QGIS desktop, they are automatically available in the web client. The mobile web viewer I just showed you is also available as a viewer built into QGIS Cloud. All that together gives you the QGIS Suite, which is QGIS desktop, QGIS Server and a publication platform like QGIS Cloud; then you really have all you need for doing GIS and publication on the web. Thank you, and I'm open for questions. Hello, one of your slides had an access control layer there; do you have this in any examples, like the QGIS reference viewer or something like that? Yes, this access layer is built into QGIS Cloud, for instance. It's included in the viewers, which means you have to log in to have a look at certain maps. So we have protected maps on QGIS Cloud and only a limited list of people have access to them: you have to log in on QGIS Cloud and then you can see the maps, with the same viewers as the public maps. You mentioned vector and raster export; how does it work, is it on the client side or on the server? No, it's on the server side and it's using OGR, which needs an additional installation. It's supported in QGIS Web Client by calling a special URL which delivers the rasters via OGR on the server side, so it needs both sides, server support and QGIS Web Client. What are the system requirements for QGIS Server? What needs to be in place? Yeah, good question, I forgot to say that at the beginning. It is an FCGI module, so you need a web server to run it. That's quite easy; it could be Apache or nginx, as we heard this morning. And as for system requirements, it depends on your user base, how many users you have and how big your maps are.
But really, if you have four gigabytes of RAM, that's enough for most use cases. Is it possible to upload raster data to the server? To QGIS Server? No. Sorry, to QGIS Cloud. Yeah, QGIS Cloud doesn't directly support that. It has a plugin which helps you to upload data, but this is only for vector data. If you want to upload raster data, you have to use the PostGIS raster functionality, which is possible because you have full PostGIS access. But for now you don't have help from the plugin, so you have to load the rasters into PostGIS with the regular PostGIS raster tools. Okay. Thank you. Thank you.
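The last answer mentions loading rasters into PostGIS with the regular PostGIS raster tools; a minimal, hedged sketch of that step using raster2pgsql piped into psql follows. The file name, SRID, table name and connection settings are placeholders, and the psql call will use the usual password mechanisms (prompt, PGPASSWORD or .pgpass).

# Load a GeoTIFF into a PostGIS raster table, tiled 256x256, with an index and constraints.
import subprocess

raster = "elevation.tif"
cmd = (
    "raster2pgsql -s 2056 -I -C -t 256x256 "   # SRID, spatial index, constraints, tiling
    f"{raster} public.elevation | "
    "psql -h db.example.org -U gis -d mygisdb"
)
subprocess.run(cmd, shell=True, check=True)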
|
QGIS Server continues to grow with an active community and expanding user base. Besides new styling features shared with QGIS desktop, new services have been added continuously. OGC WFS (Web Feature Service) and WFS-T were the first additions to WMS, recently followed by an OGC WCS (Web Coverage Service) implementation. The map service itself also got several additions besides the GetPrint request for delivering PDF outputs made with the QGIS print composer. Performance and scalability have been steadily improved and brought to the same level as other established map servers. PDF outputs recently got support for dynamic texts and images (e.g. QR codes), and server-side GeoJSON rendering allows redlining implementations.
|
10.5446/31699 (DOI)
|
So my name is Chris Toney. I work for the US Forest Service at the Rocky Mountain Research Station, and I'll be talking about forest disturbance mapping using Landsat time series. I'd like to acknowledge my co-authors Gretchen Moisen, Karen Schleeweis and Todd Schroeder, who are responsible for a lot of the work I'm talking about, and I'd also like to point out that this work is part of a larger project with other research teams contributing, so I'll have some citations throughout the presentation to acknowledge their work as well. This work is part of a larger project called North American Forest Dynamics. It's funded by NASA and is a collaboration among NASA, the University of Maryland and the research branch of the US Forest Service. It's designed to characterize patterns and recovery rates of forests across the continent, with a goal of determining the role of forest dynamics in the North American carbon balance. One part of North American Forest Dynamics was to map the recent disturbance history for the conterminous United States using Landsat time series since 1984. That was done with a change detection algorithm called the Vegetation Change Tracker, developed at the University of Maryland, so I'll start with just a brief overview of the Vegetation Change Tracker (VCT) products, which indicate the presence of disturbance in 30-meter pixels for each year during the time series. But VCT does not provide information on the causes of disturbance, so the focus of the team I've been working with is to determine causal agents of disturbance, things like harvest, wildfire, and insect and disease outbreaks, using the VCT products as a starting point. This is done with predictive modeling at the pixel level, and I'll describe the production of raster products for CONUS using open source and the software implementation in NASA's high-performance computing environment. I'll start with the end products from VCT and then on the next slide I'll show the algorithm. The products include a land/water mask, so every pixel is classified as forest, non-forest or water. Then there are annual raster layers where each pixel indicates presence or absence of disturbance within the forest mask for each year, and there are some disturbance magnitude metrics associated with the disturbed pixels. This map on the screen shows all the years overlaid, or composited with each other: the green color indicates areas where there was no change detected during the period, and the other colors indicate a year of disturbance. This has been produced by the team at the University of Maryland for CONUS. The algorithm and the products have been described in detail in the literature. It's an automated process; the input is a Landsat time series stack. There's a step for image selection to get images from peak growing season with minimal cloud cover, and there are also steps for image compositing from multiple dates in cases where cloud contamination is excessive. Then the selected images for each year undergo some cloud and shadow masking, and four different spectral indices are calculated from the original Landsat bands, so things like NDVI are used, and there's a custom index in there that was developed specifically for the algorithm. These spectral indices are used in time series analysis to detect shifts in the spectral trajectory that indicate a forest disturbance. So with the time series analysis, VCT looks across the spectral trajectory and tries to find some kind of shift that is
consistent with a disturbance event. The example on the right is for a harvest followed by rapid regeneration: there's a sudden shift in the spectral signature when the trees are removed, followed by rapid recovery over just a few years back to something that looks like the original forest cover. With different disturbance types and regeneration dynamics, the pattern or shape of this trajectory could take several different forms. I'll come back to that idea in a bit, but I just want to point out that VCT does not characterize the pattern, the shape, of the trajectory; it's just looking across the trajectory and identifying shifts that are consistent with forest disturbance. Okay, so this is an overview of the current work to assign causal agents of disturbance to pixels based on empirical modeling. It starts with a set of training data: training samples from locations with known disturbance types. The training samples are associated with a set of predictor variables, things like spectral values and other predictors. Then predictive modeling is done with random forest, which is a machine learning technique based on classification and regression trees. Once the random forest models are developed, they can be used to predict 30-meter pixels across the country, as long as we have raster datasets for all of the predictor variables in the model. I'll go into a few details, but in the next couple of slides I'm going to give just a little overview of our software and computing environment. Open source was used for all the processing. We use the GDAL utility programs extensively; in particular we did some work with the polygon enumeration algorithm in GDAL and the gdal_polygonize command-line script, and I'll talk a little bit about that. The combination of Python with GDAL and NumPy was kind of a workhorse approach to a variety of different processing. Some things were done in C++. Our research organization is very R-centric for statistical modeling, so we relied on R quite a bit, and with the rgdal package we're able to do raster processing directly within R, which is really convenient for modeling or computation that depends on other R code or is already implemented in R packages. Then the snow and snowfall packages for R allow us to do parallel computation, and we also use QGIS quite a bit for visualization and data checking. Most processing was done by Landsat path row.
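As an illustration of that Python + GDAL + NumPy workhorse combination, here is a minimal, hedged sketch of one routine step: computing NDVI from a red/NIR band pair of a Landsat stack. File names and band numbers are placeholders (for Landsat 5/7 reflectance stacks, red and NIR are typically bands 3 and 4); it is not the project's actual code.

# Compute NDVI = (NIR - red) / (NIR + red) and write it out as a GeoTIFF.
import numpy as np
from osgeo import gdal

src = gdal.Open("landsat_stack.tif")
red = src.GetRasterBand(3).ReadAsArray().astype("float32")
nir = src.GetRasterBand(4).ReadAsArray().astype("float32")

np.seterr(divide="ignore", invalid="ignore")   # tolerate zero-sum pixels (become NaN)
ndvi = (nir - red) / (nir + red)

drv = gdal.GetDriverByName("GTiff")
out = drv.Create("ndvi.tif", src.RasterXSize, src.RasterYSize, 1, gdal.GDT_Float32)
out.SetGeoTransform(src.GetGeoTransform())
out.SetProjection(src.GetProjection())
out.GetRasterBand(1).WriteArray(ndvi)
out.FlushCache()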
There are 434 path rows that cover CONUS. Each path row has approximately 46 million total pixels; path rows in areas with a lot of forest can have 20 to 30 million forest pixels, and some path rows have much fewer, but across all the path rows in CONUS 4.3 billion pixels have been classified as forest. That's not an areal estimate, because the path rows overlap, but that's what we had from a data processing perspective. All the processing was done in NASA's high-performance computing environment using a cluster called Pleiades. Pleiades has 11,176 compute nodes; there are three different node types that have either 12, 16 or 20 CPU cores per node, so it's a total of just under 185,000 CPU cores. It has lots of disk storage, and all of the open source software that we needed is well supported there. Okay, so I'll step through a few details of the workflow, starting with generation of the predictor data. We're going to use the spectral data obviously as predictors, but we also wanted to derive information about the spectral trajectories through time and make use of that, so I'll talk a little bit about that. We also looked at the geometric attributes of disturbance patches, and I'll go into that. And then there's other ancillary data for predictors that already exists, like terrain, a forest type classification, and a burn severity product for wildland fires. So VCT tells us which pixels have been disturbed, but it doesn't give us the cause of disturbance. We do know that different types of disturbance look different on the landscape: for example, harvests tend to be confined to a certain range of sizes, they don't get really large, and they tend to be fairly simple shapes that are often kind of squarish, whereas wildland fire may be small but can get very large and often has complex geometric patterns. So the geometry of disturbance patches may be a helpful predictor of causality. The VCT products are raster-based, so we went through each of the annual disturbance layers and delineated polygons as regions of connected pixels, generating vector data for each year. For this we used the gdal_polygonize utility in eight-connectedness mode, eight-connectedness just meaning that the pixels can be connected along the edges or at the corners. Then we had a simple program to go through that vector data and calculate geometric attributes of each polygon: we calculated area, perimeter, the shape index and the fractal dimension index. These two indices are similar, and they describe the complexity of the shape based on area-to-perimeter ratios; they basically normalize to a square as the simplest possible shape. Then we used gdal_rasterize to generate raster versions of all the polygon attributes to use in modeling. Across all the path rows for CONUS we delineated a little more than 210 million polygons. For another class of predictor variables, we wanted to derive information about the spectral trajectories through time. This was based on work by Mary Meyer in shape-restricted regression, which is a non-parametric curve fitting technique where the curve is constrained to fit a certain shape. In conjunction with this work on the Landsat time series, an R package called coneproj was developed to implement the computations. We're currently working with seven predefined shapes that are believed to indicate some kind of underlying forest dynamics. So the algorithm is to fit each of these shapes to the temporal trajectory at each pixel.
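Going back to the patch-geometry predictors described just above, here is a hedged sketch of that step: polygonizing an annual disturbance layer with eight-connectedness and attaching shape metrics to each patch. The shape index and fractal dimension below use the common FRAGSTATS-style formulas, which may differ in detail from the project's own program, and all file and field names are placeholders.

# 1. Regions of connected disturbed pixels -> polygons (8-connected).
import math
import subprocess
from osgeo import ogr

subprocess.run(
    ["gdal_polygonize.py", "-8", "disturbance_2005.tif",
     "-f", "ESRI Shapefile", "patches_2005.shp", "patches_2005", "DN"],
    check=True,
)

# 2. Attach area, perimeter, shape index and fractal dimension to each patch.
ds = ogr.Open("patches_2005.shp", update=1)
layer = ds.GetLayer()
for name in ("AREA", "PERIM", "SHAPE", "FRAC"):
    layer.CreateField(ogr.FieldDefn(name, ogr.OFTReal))

for feat in layer:
    if feat.GetField("DN") == 0:        # skip the undisturbed background polygons
        continue
    geom = feat.GetGeometryRef()
    area = geom.GetArea()
    perim = geom.Boundary().Length()
    feat.SetField("AREA", area)
    feat.SetField("PERIM", perim)
    feat.SetField("SHAPE", 0.25 * perim / math.sqrt(area))            # 1.0 = square
    feat.SetField("FRAC", 2.0 * math.log(0.25 * perim) / math.log(area))
    layer.SetFeature(feat)
ds = None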
Fitting the seven candidate shapes at each pixel involves iterative curve fitting for the non-parametric forms, and then we choose the best-fit shape based on an information criterion that includes a penalty for model complexity, so it's handling overfitting by the more complex shapes. The outputs from this are written to raster format: we get the shape classification itself, the selected shape, and then the fitted curve allows us to derive some associated parameters, such as the year of the shift, the magnitude and duration of shifts, and change rates before and after disturbance. We can also write out the fitted values from the curve, which we were doing for future use, but the shape classification and these parameters derived from it are then used as predictor variables in the disturbance agent modeling. It's very CPU intensive. There's been some work to optimize the shape-fitting algorithms in R, which are implemented in C++, so there was a good bit of improvement there, but it's inherently CPU intensive because it involves iterative curve fitting for the non-parametric shapes, and we have to fit each of the shapes to the trajectories at each pixel. To implement this for CONUS we have to fit 4.3 billion trajectories, so our approach has been to do parallel computation. In these graphs I'm showing the runtime in hours on the y-axis versus the number of CPUs on the x-axis. This example is for path 12 row 28, which has 30 million forest pixels, and it takes a week to run this on one path row running sequentially on one CPU. Again, the cluster has three different node types with 12 to 20 CPUs each, so we can get that down to just under 10 hours on a 20-CPU node, or about 16 hours on a 12-CPU node. It scales up really well to multiple CPUs, and out in this range of 10 to 20 CPUs we're still getting good efficiency in the processing. We implemented this with R and GDAL. We're running four different versions of this for each path row, for the four different spectral indices, and we assign a path row for a given index to one compute node, so for all 434 path rows we need 1,736 total nodes. Then within a path row, on a node, we split the data and do the computations in parallel; we use snow and snowfall for that, and the curve fitting relies on coneproj. If we use the 12-CPU nodes on Pleiades, processing all 4.3 billion forest pixels would use a total of just under 21,000 CPUs, and that would have about a 16-hour runtime if we were able to set it up as a single job. We've done this twice now for all the path rows, and there is some wait time in the job queue because the system is busy, but we've been able to turn this around in under a week, maybe four or five days if it's not too busy. Again, this is an intermediate product for us, but it's kind of a new product and we're really optimistic about its potential. It works, so we can do it, but I'm not sure how we could do this without this kind of computing resource. That's all I'll say about the predictors. I won't say too much about the model development, other than that it uses training data collected in the project using a photo interpretation technique.
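The within-node parallelism just described is done in R, with the snow and snowfall packages driving the coneproj curve fitting. Purely as a schematic stand-in (not the project's code), the same split-a-path-row-across-cores pattern looks roughly like this in Python, with fit_trajectory() as a placeholder for the real shape-restricted fit and toy-sized data in place of a 30-million-pixel path row.

# Split pixel trajectories into blocks and fit them on multiple CPU cores.
import numpy as np
from multiprocessing import Pool

def fit_trajectory(traj):
    """Placeholder for the shape-restricted fit of one pixel's time series."""
    return float(np.argmin(traj))            # dummy result standing in for fit parameters

def fit_block(block):
    return [fit_trajectory(t) for t in block]

if __name__ == "__main__":
    n_pixels, n_years, n_workers = 100_000, 29, 12     # toy-sized "path row"
    trajectories = np.random.rand(n_pixels, n_years)   # stand-in for the Landsat index stack
    blocks = np.array_split(trajectories, n_workers)
    with Pool(n_workers) as pool:
        results = np.concatenate(pool.map(fit_block, blocks))
    print(results.shape)                               # one fitted value per pixel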
I won't say too much about the model development, other than that it uses training data collected in the project using a photo interpretation technique. Training data is limited; we'd like to have a lot more. We've supplemented it with some data from LANDFIRE, which is another national remote sensing program, and from FIA, which is a national forest inventory program. The modeling uses the randomForest package in R and also makes use of the ModelMap package developed by Liz Freeman, also in the Rocky Mountain Research Station. ModelMap has some nice features for working with random forests in a geospatial context, and it has extensive model diagnostics associated with it. The prediction code is very similar to the shape fitting code I just described: it's based on R and GDAL, it's masked to the VCT, and predictions are done in R with the randomForest package. Certain predictor variables are generated at runtime. There's an option for parallel processing within path rows, but it's not really needed because the processing time has been a little under an hour for a path row running on one CPU, and we can run all 434 path rows in parallel. So the map product has predicted values of disturbance agents in 30 meter pixels; again, it's masked to VCT non-forest and water. So far the assessment of the product has been based strictly on the random forest model diagnostics and visual assessment. There's just not enough training data to be able to set aside any data for an independent validation set, so there's some work ongoing to generate additional ground truth data. As far as data distribution, it's being done through the distributed archive center at Oak Ridge National Laboratory. The Vegetation Change Tracker disturbance year and magnitude products are being prepared for distribution now; they will be available in the near future on this site. As far as the predicted causal agents and associated products, they're still being refined and undergoing quality assessment, so some of those products will be available in some form, but exactly what we distribute and the timing of that is still being determined. So that's all I have. I'd be happy to take any questions if there's time. Interesting talk; I have a lot of questions, but I'll try to limit myself to two. One is, did you try to do any disaggregation by age class? And the second one is, did you in any way attempt to identify man-made disturbances that are irreversible, such as conversion of forest into a developed class? The answer to the second question is yes, conversion is a causal agent class in the national legend, so we are trying to get at that. I didn't have an example to show, but there is training data for it and it's being included in the modeling, and the shapes of the spectral trajectories are probably pretty helpful in getting at that. As far as age class, no, we have not been incorporating age class. That's a good idea. We would need national data for it, which may or may not be possible, but it's definitely something to consider; it's a good point. How deep is your image stack? And then a second question: how much does it cost to run this on the Pleiades cluster?
The image stack is 1984 through 2012, one image per year, so an annual stack. Earlier versions of VCT were biennial stacks, but we're currently working with the University of Maryland team's latest products, which were based on annual stacks for those years; I think that's 29 years. The cost, I have no idea. We don't get charged for it right now. It's a collaboration with NASA, and there's some interest in this as sort of a demonstration product, to do this type of processing with the Landsat data. So far it's strictly a collaboration and there's no charge for it, at least in this isolated case. Normally there's a whole job scheduling system, you get billed and there's accounting for it, but luckily we are not having to deal with that so far. Thanks. So with your predicted map of causal agents, do you have any products that quantify the uncertainty around those predictions? I'd imagine there would be cases, particularly along the edges, where you weren't certain exactly what the causal agent was. Do you have products to demonstrate that? We're looking at that a little bit. The ModelMap package that I mentioned does some of that, and if you're familiar with random forest, it's an ensemble of trees; it takes a majority vote from a large number of trees, like 500 decision trees. So ModelMap has some functions in there that look at the variation in the predictions from the individual trees to get at the uncertainty of the prediction. We're doing that a little bit, and we have a set of test scenes that we've selected to test things on. We're not producing any of the uncertainty measures nationally for all the path rows, but we are looking at that on those test scenes and trying to include it in some of the development and refinement. Great, thank you. And a second question: are you aggregating this data back to those shapes that you pulled out of the VCT? In other words, are you attaching an attribute to those polygons that says this was a fire or this was a harvest, so that you have a vector data set that represents disturbances? We have not yet, but the idea is to get there. Once we have the raster disturbance agent product at a point where we feel we're ready to move forward, that's definitely a next step, to make use of those polygons we've enumerated to do that. One of the questions, too, was that we're predicting at the pixel level, so how much speckle is there going to be? So far the raster output looks like polygons, but those polygons we have enumerated are definitely going to be helpful in assessing the variability of the predictions within the polygons and then hopefully labeling those polygons. That's the next step. Good point. Thank you. Thank you very much.
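The two ideas raised in this Q&A, per-pixel uncertainty from the tree votes and majority-vote labeling of the VCT patches, can be sketched on the raster side. Below is a toy illustration using scikit-learn as a stand-in for the R randomForest/ModelMap workflow described in the talk; the data, class codes and patch IDs are random placeholders, not project outputs.

```python
# Toy sketch: (1) fraction of tree votes as a crude per-pixel uncertainty
# measure, (2) labeling each disturbance patch by the majority vote of the
# pixels inside it. All inputs here are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((5000, 8))              # 8 predictor bands, toy values
y_train = rng.integers(0, 4, 5000)           # 0=harvest 1=fire 2=stress 3=conversion
X_pixels = rng.random((20000, 8))            # flattened pixels of one path row
patch_id = rng.integers(0, 300, 20000)       # patch IDs rasterized from VCT polygons

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1)   # talk mentions ~500 trees
rf.fit(X_train, y_train)

proba = rf.predict_proba(X_pixels)           # fraction of trees voting per class
pred = rf.classes_[proba.argmax(axis=1)]     # mapped causal agent per pixel
uncertainty = 1.0 - proba.max(axis=1)        # small winning margin = less certain

# Majority-vote label for each disturbance patch (patch 0 = no patch here)
patch_label = {}
for pid in np.unique(patch_id[patch_id > 0]):
    vals, counts = np.unique(pred[patch_id == pid], return_counts=True)
    patch_label[pid] = vals[counts.argmax()]
```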
|
The North American Forest Dynamics (NAFD) project is completing nationwide processing of historic Landsat data to provide annual, wall-to-wall analysis of US disturbance history over nearly the last three decades. Because understanding the causes of disturbance (e.g., harvest, fire, stress) is important to quantifying carbon dynamics, work was conducted to attribute causal agents to the nationwide change maps. This case study describes the production of disturbance agent maps at 30-m resolution across 434 Landsat path/rows covering the conterminous US. Geoprocessing was based entirely on open source software implemented at the NASA Advanced Supercomputing facility. Several classes of predictor variables were developed and tested for their contribution to classification models. Predictors included the geometric attributes of disturbance patches, spectral indices, topographic metrics, and vegetation types. New techniques based on shape-restricted splines were developed to classify patterns of spectral signature across Landsat time series, comprising another class of predictor variables. Geospatial Data Abstraction Library (GDAL) and the R statistical software were used extensively in all phases of data preparation, model development, prediction, and post-processing. Parallel processing on the Pleiades supercomputer accommodated CPU-intensive tasks on large data volumes. Here we present our methods and resultant 30-m resolution maps of forest disturbance and causes for the conterminous US, 1985–2011. We also discuss the computing approach and performance, along with some enhancements and additions to open source geospatial packages that have resulted.
|
10.5446/31701 (DOI)
|
Okay, thanks for coming everyone. I'm Peter Batty with Ubisense, and I'm going to talk a bit about what we're doing with large scale web and mobile enterprise applications. To begin with I'll talk a little bit about the background on our customers and the software stack we're using, and just very briefly touch on business models, which came up in a couple of other presentations. But the two big bits that I'm talking about really are simplicity, which is a theme we've heard about in a few other presentations, and then field applications, online and in particular offline. Those last two are the main chunks I'll talk about. And I think we'll risk a live demo, which is always fun with conference Wi-Fi. So our customers really are large utilities and telecom customers, in general at the bigger end of the scale. Basically, a lot of folks in our company used to work at a company called Smallworld, and I'll come back to that in a moment, which was in that space. These customers tend to have very dense vector maps, these kind of complex network maps, and large data volumes; they have up to five, six, seven million customers they handle. In some cases there are quite complex telecom networks, things like this where you have cross sections of networks, a lot of fairly complicated things to handle, even going down to inside plant in some of the telecom applications, where you're modeling things like this within the GIS. The three big traditional players in that space are GE Smallworld, Intergraph and Esri, all traditional closed source GIS products, sophisticated but kind of complex too. In previous lives I was CTO at Smallworld and at Intergraph, so I know this space well; I've worked around this space a lot. So those are the incumbents: large, complex, sophisticated systems. But then on the other end of the spectrum, you have location being pervasive and simple in all of these consumer applications now. So our big focus the last few years has been to say, what can we do to take all of these ideas and technologies from the consumer space and apply them to what are probably some of the most complex enterprise applications, and really take that data and make it available much more simply to lots of users. So yeah, just an example of keeping it simple there. I'll just briefly mention our software stack and how we got to where we are. When we started three, four years ago, for our first prototype we used a product called Arc2Earth, which as the name suggests was focused on storing Esri data in the cloud and integrating with Google. That ran on Google App Engine, and we used Google Maps on the front end. And basically, with our customer base, they're really very resistant to the cloud. These large utilities and telecom customers are very conservative IT-wise, and they just said, we love the concept of what you're doing, but we don't want it to be in the cloud. And so we transitioned our back end to PostGIS and MapFish. We're not really using a lot of the features of MapFish; it's quite a rich product, but we pretty much use it just for a REST API in front of PostGIS. So there's a possibility we might use some other things in future there, but it's worked well for us up to this point. So we developed most of our application on Google Maps.
And then the next iteration came when we had a lot of customers wanting to do offline things, but Google is basically focused on online; their terms of use specifically prohibit you from using either the JavaScript or the data offline. And so that was the primary driver for us to switch to Leaflet. So basically our whole enabling software stack is open source right now. We still use Google data a lot and we use Google Street View, we like that, but because of the offline apps we basically moved over to Leaflet. And as you can see on the left-hand side, we can use multiple data sources, whether it's Google, OpenStreetMap, whatever, as the background maps. So that's just a little background on how we got to where we got. And I thought I'd just do a quick show of hands, actually: who predominantly uses open source for their geospatial stuff here? And who uses a significant amount of non-open source? Okay, so a fair mix. So I think it's just worth saying this. To many of you this may be obvious, or preaching to the converted, but I don't have any sort of innate predisposition to open source or closed source. Increasingly, the more I work with open source, the more good things I like about it. But if a closed source system is going to do the job better for me, then I'm going to choose that, other factors permitting. So we didn't come into this saying, hey, we want to use open source, but just through a combination of the factors on this slide we've gradually, over time, moved more and more towards an open source stack because it meets these needs best. I won't go through this in great detail, but clearly you need the right functionality, otherwise none of the rest matters. It's interesting how many people downplay the cost, but to me it's still important that it's free. Don't understate that: the software licenses are free. Again, as other people say, it doesn't mean the whole implementation is free, but for quite a lot of reasons, especially when you're working with large numbers of customers out there and you want to scale up your servers without additional license costs, that kind of thing is important. Support: our experience in the open source world has been great. By and large we haven't needed it. The one time we did, we threw a load of new GeoServer things into production, helping recover from Hurricane Sandy, and we were getting crashes a few times a day and couldn't figure out why. And we got an overnight fix from Andrea Aime in Italy, which is something you'd be really hard pressed to get from most of the big geospatial vendors. Obviously mileage may vary, but that was the one time we needed a critical fix, and we got it super quickly. Terms is another important thing, like I mentioned. The big driver was being able to work offline, and we were able to do that with Leaflet and OpenStreetMap; we couldn't do that with Google. So in a lot of cases there's more flexibility there. And predictability: the thing is, with a lot of commercial vendors you don't know if they're going to discontinue a product or change terms, change pricing, that kind of thing. With open source you sort of have a long-term commitment, where even if something happens with people moving off the project or whatever, worst case you've got the source code.
And in general, if it's a dynamic community, there's much more of a guarantee that it's going to be there on the same terms and conditions tomorrow. And then I'll just very briefly mention business models. This came up in a couple of other presentations. I think if you're working with open source, there are multiple ways to make money, and we all need to feed the family somehow. There are companies like Boundless who sell support and services, where the software is free; CartoDB and Mapbox sell hosting. But then there are a number of companies that, if you like, have a hybrid of closed and open source. They're actually selling quite a lot of products that are closed source, but they're still quite engaged with the open source community. Google is one good example: their primary products are closed source, but they really do a lot of work in the open source community, they sponsor a lot of open source initiatives, and they've released a lot of open source projects. Safe Software, who I was talking to earlier, are similar on a much smaller scale, but they use quite a bit of open source software and contribute to that. And we're basically the same. We sell the products that we're showing you here to the large utilities and so on, but we're quite engaged with the open source world in multiple ways and giving back in different ways, and I'm hoping that over time we might possibly even open source some of the components of what we're doing. So anyway, let's get on to the main topics; that was in many ways preamble and miscellaneous things. Simplicity is one of the first things I wanted to talk about. This is the general aim of what we talked about, so I'll just give you a quick demonstration of what we're doing and try to talk through some of the things we're trying to do to simplify what are really some of the most complex sets of data and applications behind the scenes. So this is our main app, as I mentioned, using Leaflet, with PostGIS on the back end. So we come in here, that's the wrong place. Another thing I'll show you here, if I can see the screen, is that it's got a lot smaller. I'll show you the search bar more in a moment, but we can click on anything on the map. One thing I'm big on is not having these separate info modes to click into and that kind of thing. We can look at Google Street View. One thing that's kind of nice, which a lot of people aren't aware of, is that in the Street View API you can embed things in there. So we can look at pole numbers; we can look down the street and see the next pole numbers. There are a lot of applications for that in the sort of things we're doing. Often it can save people going out on a visit to the field. But you can see all of this looks very much like Google Maps or OpenStreetMap, the kind of things that people will have seen elsewhere. We do a lot with this search window here, a sort of single box search. Again, trying to simplify: a lot of typical GISs have queries where you have to specify tables and field names. Here you just type a pole number or a valve number or whatever you're interested in. You can save bookmarks here; let's see if there's an area here. And we do a lot of queries through this search box too. So again, if I just start typing "pole", I can say query poles in window, and you get all of those back. So we're trying to have this very intuitive interface here.
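Purely as a hypothetical sketch of the single-search-box idea (the real product's parsing rules and data model are not shown in the talk), something along these lines dispatches a typed query either to an ID lookup, a spatial "in window" query, or free-text search; the `db` methods here are assumed placeholders.

```python
# Hypothetical dispatcher for a single search box: asset IDs, "in window"
# spatial queries, and a free-text fallback. Patterns and db methods are
# made up for illustration only.
import re

ID_PATTERN = re.compile(r"^(pole|valve|switch)\s*[-#]?\s*(\w+)$", re.I)

def dispatch_search(text, bbox, db):
    text = text.strip()
    m = ID_PATTERN.match(text)
    if m:                                    # e.g. "pole 12345"
        kind, ident = m.group(1).lower(), m.group(2)
        return db.find_by_id(kind, ident)
    if text.lower().endswith("in window"):   # e.g. "poles in window"
        kind = text.lower().replace("in window", "").strip().rstrip("s")
        return db.find_in_bbox(kind, bbox)
    return db.full_text_search(text)         # fall back to free-text search
```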
And then one last thing I'll show you is trying to simplify more complex operations from the GIS. One thing that you have a lot in this kind of space is network tracing operations, where you're tracing through the network and there are all kinds of different parameters to trace upstream or downstream and look at things. We try to really simplify that, because the sort of users we're targeting don't know about GIS and they don't know about traces, but they do know the concept of the pole upstream or the device upstream from where you are now. So we've just pre-calculated a lot of these things. If we click here, we can say show me the next upstream device. This is a transformer I've clicked on, and it shows me that this switch is the next upstream device. And then the circuit is the set of all the devices that are fed from a single point. If I click on show me the circuit, you can see it very quickly highlights that circuit and you can jump to it. Those kinds of things typically take multiple seconds back in the GIS at the back end. So again, we've really tried to think about pre-calculating results from the more complex operations that these large numbers of users are going to use, and then really presenting them in a way that makes sense to the user, not in terms of GIS operations. And then finally, I saw a tweet from Javier from CartoDB yesterday, and he said something like: the future of GIS is not one application with 100 buttons, it's 100 applications with one button. That's a philosophy that we subscribe to. Maybe it's not one button, but maybe three buttons or five buttons. So something we build into our app is the concept of multiple applications. Here, if I click on the standard application, it comes in with a bunch of different layers, the electric network and whatnot. If I go back to that main screen and click on vegetation management instead, it's a different set of layers that are relevant to that particular application, and it's got a specific toolbar down there for that particular application. Again, something we really try to do is say, let's have a bunch of different focused applications for different users, so it's important to build that into the system to make it easy to deploy those things. So those are a few ideas I wanted to share on keeping apps simple. Again, it's a theme like we heard from Vladimir and so on, but I'm very much a believer in that.
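The "next upstream device" and "circuit" lookups described above are precomputed rather than traced live. As a hedged sketch of how such a precomputation might look for a radial network, here is a plain-Python version; the feed-point and downstream-edge data model is an assumption for illustration, not the product's real schema.

```python
# Hedged sketch: precompute "next upstream device" and "circuit" lookups from
# a radial network so the client never runs a live trace.
from collections import defaultdict, deque

def precompute(feed_points, downstream, is_device):
    """downstream: dict node -> list of nodes fed from it (radial network)."""
    upstream_device = {}     # node -> nearest upstream device
    circuit_of = {}          # node -> feed point whose circuit it belongs to
    for feed in feed_points:
        queue = deque([(feed, None)])
        while queue:
            node, nearest_dev = queue.popleft()
            circuit_of[node] = feed
            if nearest_dev is not None:
                upstream_device[node] = nearest_dev
            next_dev = node if is_device(node) else nearest_dev
            for child in downstream.get(node, []):
                queue.append((child, next_dev))
    # Invert once so "show me the circuit" is a single lookup
    circuit_members = defaultdict(list)
    for node, feed in circuit_of.items():
        circuit_members[feed].append(node)
    return upstream_device, circuit_members
```

The results can be stored as plain attributes on each feature, which is why the web client can answer "upstream device" or "whole circuit" in milliseconds instead of re-running a multi-second trace in the back-end GIS.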
So let's switch back to the presentation, and we'll talk a bit more about the second topic, offline. I guess I skipped a couple of things here. I'll just quickly run through some screenshots showing an example of the kind of applications we want to run offline with these customers in the field. Damage assessment is a key one. After a storm, all these poles are knocked down and whatnot; they want to send people out to evaluate the damage and be able to push that back in real time, as much as they can, to the operations center. So here's the operations center. This is a screen on a desktop; nothing has started yet, there's no damage shown. We switch to a guy in the field, and he has a pretty simple application where he can click on a point, enter information saying there's a pole down here, and create a damage point. And then back in the control center, assuming the guy is online, you see the little icon where that damage point is, and then you get a thematic map saying we have some damage in this county here, and that's maintained as you go along. Then if we go back to the field, the little thing down at the bottom is saying he's offline now. Especially after a storm, but also just because these guys work in rural areas, they will be offline quite a lot of the time. So the aim for the app is that you need to be able to seamlessly switch between online and offline, capture data offline, and as soon as you connect again, push it back. So the guy in the field can go ahead and create a couple more points. Now we've got two points out in the field; he's offline. If we go back, there's still just that one point back in the operations center. You reconnect now, it automatically syncs behind the scenes, and now you've got three of those points back there. So that's just to give you a flavor for what we're trying to do. Another requirement is capturing photos in the field and pushing those back too, again with the same general wish to be able to store those offline. So the overall requirements for the offline app: there's a really very strong desire for it to be cross-platform, Android, iOS, Windows 7 and 8, and then also web, so we really want our existing web applications to be able to run there. We'll talk about the different approaches. We do need the ability to sync large data sets for our app, potentially gigabytes of data. A lot of these utilities would like to sync either all of their database or a large region, because especially in these damage or emergency response situations they might be offline for quite a while. In other situations you might be able to use smaller data sets, like a single circuit, and that enables a different technology approach, HTML5, which I'll talk about shortly. In terms of the characteristics of that sync, there tend to be two different types of data. There's your bulk GIS data, which for these customers doesn't change very rapidly, but it does change daily. There you have this bulk read of multiple gigabytes, typically, and then you really want to do a nightly incremental update, and nightly is good enough for that. But then you have these more time-critical data sets, like the damage assessment data, where you want to sync as soon as you're connected, and there you really need to connect and disconnect and sync transparently. So there are basically three different architectures we tried to do this with. We'll talk about each of these three, so I won't dwell on it here, but basically two use replication, where you really store a lot of data offline, and the other one uses HTML5 and more dynamic caching. The first one was really just running a replica on a Windows laptop. This let us use exactly the same stack as we had on the server, with PostGIS, MapFish and everything. In many ways this was the simplest option, but it only runs on Windows, well, or Linux or Mac, but it won't port to Android or iOS because of the software stack. You do need custom replication for this; at least, I haven't found any replication for PostgreSQL or PostGIS that handles this model where you're disconnected for long periods of time and then you re-sync.
There are replication solutions, but they're more for backup or failover kinds of things, where they really assume you're connected all the time. So this one is relatively simple to implement, although there is quite a lot of detail in doing the custom replication. The cons are that it's laptop only and a fairly heavyweight software stack. But that works. So the second approach we used was PhoneGap, a PhoneGap replica. We've had a few other people talking about PhoneGap, or Cordova, basically the same thing; Cordova is the open source version. For anyone who doesn't know, basically what it does is take your web application, HTML and JavaScript, and compile it to a local app. But it is up to you to take the data and have that stored locally. To do this, we basically replaced PostGIS with SQLite, which runs on all of the platforms we needed, both for geometry and attributes and also to store the tiles, basically in an MBTiles-type format. We also had to rewrite a little bit of our server side code, because that was written in MapFish and Python. So for the key services that we needed, we rewrote those in JavaScript to wrap the database and refactored the code so those differences were in common classes, so the great majority of the code stayed the same. And that's what it looks like on a chart; I won't dwell on that now. Let's see how we are. I think we just about have time, and we've got a break after, so I'll give you a quick, like two-minute demo just to give you a feel, because there was quite a bit of discussion in earlier sessions of how well PhoneGap works, is the map smooth enough, and that kind of thing. So I'll just let this video play; it's about two minutes, and we come in kind of halfway through when I'm just going offline, I'd shown a few things online. Go offline: if I switch to airplane mode, we lose our network connection. What you'll see now, if I zoom out a little bit, is that we actually have some data cached there, but in general I'm starting to see that I have no background map, because I have no connection to Google; here it's saying, sorry, we have no imagery here. We can set this up to flip automatically, but basically I can say I'd like to switch to a local OpenStreetMap backdrop now. So now you can see this is a locally stored base map. We've still got the same capability to pan and zoom; this particular base map just covers certain zoom levels. Let's zoom in again. This time, if I select the same pole, you see there's no Street View here; it knows it's offline, so it just doesn't show that. But we have other capabilities that we can show, like if we want to see poles in window, for example. We can run that, and it shows all the poles in the window here. You can quickly go to a specific one, and that kind of thing, so the regular kind of functionality that you have. I can search for a pole number, for example, and go there. So again, all the same functionality that we have when we're regularly online. I think that shows you the principle. I could go through a couple more things, but basically we're just porting everything over to SQLite. In effect you do get all the same functionality, except for things where you obviously need a connection, like Google Street View and so on. So in general we found that that worked pretty well, but there was quite a bit of detail and fiddling around.
Again, there was some discussion earlier of people's experience with PhoneGap, but overall ours has been pretty good. You do hit odd little tricky things where you have to spend time figuring out what caused a difference between the two platforms, but we've been working on it for a little while now and it seems pretty robust and works pretty well for us. Basically, the benefit of having one common code base for web and mobile across the different things makes us lean pretty strongly towards that versus developing native apps for the different platforms. It might depend on your requirements a bit, but for us that's been the case. One caution with iOS: while I'm a bit of an Apple fanboy in general, I have increasingly got quite fed up with Apple in terms of these enterprise applications. Their terms and conditions can be quite a pain, and they've tightened them up quite a lot. One basic thing is that Apple has to review all code, including fixes and whatnot, which can be a multi-week process. And it used to be that you could deploy testing systems with this system called TestFlight, but they've clamped down on that in the last couple of months, so even quote-unquote beta things that you're deploying through TestFlight have to be approved by Apple. So there's quite a bit of overhead there. And then our apps tend to be these big complex things that are customized to each customer, and Apple has a couple of different ways of distributing those so the apps don't go through the App Store, they're specific to a given customer. I won't get sidetracked on that, but just by way of warning, there is quite a lot of complexity here and we've hit quite a lot of issues. So we're encouraging people pretty strongly to look at Android for these kinds of enterprise deployments at the moment. Maybe that will improve, but that was our experience. And then the final thing I wanted to talk about was the experience with HTML5 offline. There are constraints on what you can store with HTML5 offline in terms of volume of data, but it does work quite well with smaller amounts of data. In particular, for that damage assessment application I showed you, quite often what they'll do is walk one circuit at a time, which is a relatively small area they'll give to one person, and so it's quite reasonable to download that using HTML5. And this is much more dynamic: you don't have to install apps, you don't have to do a big initial data sync, that kind of thing. In the interest of time I'll skip the demo on this one; if anyone's particularly interested I can show the video afterwards, but I'll just explain what we did. Essentially it's very similar functionality. It's a bit more limited, because you don't have a full database with HTML5, at least with what we've done now, so it is more of a subset of the functionality, but you can basically pan, zoom, select, you can create these damage reports like we saw before and push them back. We haven't implemented anything like spatial query, and this is still prototype stuff right now; the other stuff that we've done is a pretty solid product, but this we've been prototyping. So I just wanted to explain a couple of aspects of HTML5 for people who are not familiar with it.
There are kind of two separate things when people talk about offline storage, well, two with variants of them. One is what's called AppCache, or cache manifest files, and this basically lets you list a set of files that get cached for when you're offline. And then there's offline storage, and there are various types of that I'll talk about, where you can more explicitly store things under programmatic control. This is what a cache manifest file looks like; I had to put a little bit of code up here at some point. Basically, the top files are sort of static HTML, JavaScript and CSS files, so that's relatively static, and then beneath that we've got a bunch of image PNG files listed. With what we're doing, we tried a couple of different ways of getting attribute data down: one was tiled GeoJSON files, which gives you a sort of simple spatial indexing, so if you click somewhere you can figure out which tile it would be in and look there, or just a single GeoJSON layer if it's a relatively small area. But anyway, you can dynamically create this cache manifest file for a certain area, so it's a combination of the static stuff up front and then the dynamic tiles for the area that you're in. Just a couple of other basics: on the offline storage side, which is separate from that cache, there are multiple different types of offline storage, and there's quite a bit of inconsistency between different browsers; I won't get into the detail here. There's also quite a lot of variation in the offline storage limits. I've got links on this, and these are all from HTML5 Rocks, which is a really good site for this kind of thing. So those limits vary quite a lot, but a library that we found really good is called Large Local Storage, and that abstracts all of these differences and in theory gives you unlimited storage up to any hard limits on the device, using different underlying storage mechanisms. We've used that and definitely recommend it as a mechanism for working across browsers, for using offline storage. So anyway, back to the specifics of the app: we found that caching individual tiles using the AppCache works reasonably well for fairly small areas, like one electric circuit, which is probably less than a square mile or something. In our example we had 800 tiles, and it maybe takes one to two minutes to download that on a reasonable network connection, so that was kind of manageable. But there are a couple of drawbacks. One is that AppCache is all or nothing: if it fails at any point, the whole cache gets rolled back and you have to download every single file, so that's not very good as you get into larger data volumes. Also, I think this may be on a separate slide, but basically it's okay for smaller areas, but just copying lots of small tiles is intrinsically slow, whether it's in a web environment or anywhere else, so when you get to really large data volumes, doing a tile at a time doesn't work very well; I'll come back to that in a second. The other thing is there's no real ability to manage or delete those caches programmatically; you can't get at that from your application code, so the user has to know more technical stuff if they want to clear caches out, and that kind of thing. I'm nearly done there.
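Since the talk mentions dynamically creating the cache manifest for a given area, here is a rough sketch of that idea: emit the static app files plus every XYZ tile covering a bounding box at a few zoom levels, using the standard slippy-map tile formula. The file names, zoom range and URL template are placeholders, not the real application's layout.

```python
# Rough sketch: generate an HTML5 AppCache manifest for one small area by
# listing static app files plus every XYZ tile covering the bounding box.
import math

def tile_index(lon, lat, z):
    n = 2 ** z
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_r) + 1.0 / math.cos(lat_r)) / math.pi) / 2.0 * n)
    return x, y

def build_manifest(bbox, zooms, static_files, tile_url="tiles/{z}/{x}/{y}.png"):
    min_lon, min_lat, max_lon, max_lat = bbox
    lines = ["CACHE MANIFEST", "# generated for one circuit", "CACHE:"]
    lines += list(static_files)
    for z in zooms:
        xa, ya = tile_index(min_lon, min_lat, z)   # tile y grows southward
        xb, yb = tile_index(max_lon, max_lat, z)
        for x in range(min(xa, xb), max(xa, xb) + 1):
            for y in range(min(ya, yb), max(ya, yb) + 1):
                lines.append(tile_url.format(z=z, x=x, y=y))
    lines += ["NETWORK:", "*"]                     # everything else needs the network
    return "\n".join(lines)

manifest = build_manifest(
    bbox=(-105.01, 39.70, -104.95, 39.76), zooms=range(14, 18),
    static_files=["index.html", "app.js", "app.css", "leaflet.js"])
```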
So anyway, this mechanism does work fairly well. You can also cache individual tiles in local storage under more programmatic control. That means writing a little bit more code, but it gives you more control: it's no longer all or nothing, and you can delete things out and whatnot, so for a serious effort that's probably more worthwhile. As I was saying, for individual tiles it's very slow to copy them, so we have done some experimentation recently with downloading MBTiles files, and there's a library called sql.js which actually worked; I was pleasantly surprised how well it worked. We were able to download an MBTiles file, which is massively faster than downloading the individual files, and then unpick it using sql.js, and that actually worked pretty well in Chrome on a laptop or a high-end Android for a file that was 275 megabytes, which is a pretty good sized area; it was tiles down to a fairly deep level for a city. We had to overcome a few memory issues and flakiness, but we got it working reasonably well. Unfortunately, our main target for that particular project wanted to run it on iOS, and we'd kind of naively assumed that if it worked in Chrome it would translate. But it turns out that Chrome on iOS is built on WebKit and has the same 15 megabyte limit as all the other browsers on iOS, so there is still quite a hard size limit for this on iOS. But for Android, for a high-spec tablet, and for Windows or other things with Chrome, this does look quite promising. This is work that we've done quite recently and we haven't fully followed through on it, but I think using MBTiles or some other consolidated file is something that you really need for larger areas, just because it's not very practical to copy down the individual tiles. And then finally, I think I mentioned this briefly, but for attribute data, I guess you could do more with a local database and construct something more complex, but we're trying to keep this fairly lightweight, so we've used two main approaches. One is GeoJSON tiles, like I mentioned, where you tile the data up and you can do that basic select operation: you just figure out which tile you clicked in, and then you have a manageable number of features to search through, so you can find what you selected quickly. Or the other thing, for relatively small areas, is to just download GeoJSON for the whole area, and Leaflet handles that pretty well, the selection and everything.
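sql.js does the equivalent of this inside the browser; as a Python analogue of the two pieces just described, the sketch below pulls one tile out of a downloaded MBTiles file (standard schema, TMS row order) and computes which tile a clicked lon/lat falls in, so a matching GeoJSON attribute tile could be looked up the same way. The file name is a placeholder.

```python
# Python analogue of the browser-side sql.js trick: open a downloaded MBTiles
# file, pull out one tile, and compute which tile a clicked lon/lat falls in.
import math
import sqlite3

def lonlat_to_tile(lon, lat, z):
    n = 2 ** z
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_r) + 1.0 / math.cos(lat_r)) / math.pi) / 2.0 * n)
    return x, y

def read_tile(mbtiles_path, z, x, y):
    tms_y = (2 ** z - 1) - y            # MBTiles stores rows in TMS order
    con = sqlite3.connect(mbtiles_path)
    row = con.execute(
        "SELECT tile_data FROM tiles "
        "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
        (z, x, tms_y)).fetchone()
    con.close()
    return row[0] if row else None      # PNG/JPEG bytes, or None if missing

# e.g. which tile holds the feature the user tapped at zoom 16
x, y = lonlat_to_tile(-104.99, 39.74, 16)
png = read_tile("downtown.mbtiles", 16, x, y)   # placeholder file name
```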
And then one other little thing we experimented with recently is either maps on a USB stick or this thing called an air stash I just got which is like a USB stick but it has a Wi-Fi thing built in so this is actually a solution to over that could overcome that limit on iOS that if you're just storing tiles and JSON you can't run any server software on this you're purely accessing files but you know you can do basic tile maps and this simple JSON selection using that and you can stick an SD card in that I got 128 gig SD card so you know no will issues on size but the particular customer we were talking to didn't want a separate device they wanted it all on the phone so they didn't like that but I thought it was kind of another cool option to explore. So anyway quick summary on the offline I mean it will be great when we have universal wireless coverage it will simplify all these problems obviously. Today offline is still a bit harder than you might hope. I think large scale robust sync for enterprises isn't rocket science but there's a lot of detailed work to do well and there isn't really anything existing in the open source world that I know of that does that quite what we need there. HTML5 caching does have a lot of promise and I think it's workable in some scenarios especially for quite small areas but it's still not quite fully baked you know it's a little bit flaky it's not well documented all the differences between platforms and what not so hopefully that will really improve in the next little while here. And then there are these alternative options like fuel papers or external storage whatever that may work in some cases. So anyway that's it a quick run through what we've done on offline. I don't know if there's any I know we'll touch over probably any quick questions. One at the back there. Yeah. Oh sorry I guess we've got a mic here so I'll take this one first. I'm wondering when you have somebody like an electrician or a field engineer going out to fix a poll for instance right so maybe they're offline because it's in a disaster. How do you how do you figure out if you're going to give them a scanned map tiles for local or imagery and when you do make that decision how much how big of an area. Yeah so that's a good question and it does depend very much on the work the workflow you know you have to understand what they're going to do so in this damage assessment scenario usually they go and walk a whole circuit which is a predefined area so we'll give them all of that and then we kind of agree with the client you know do you need imagery or do you just need an open street map based map or whatever so that in our apps that tends to be all kind of decided ahead of time pretty much that we look at the specific workflow so I don't know if that kind of answers the question but it does depend on the scenario. Another common one with our customers is they when they go into a manhole they lose wireless connectivity so that's something that you can predict so you could have a button that says hey I'm about to go into a manhole and then you presumably just need a pretty small bit but they still want to be able to look at the records and that kind of thing. So earlier in your slides you mentioned replicating from post just on a server down to spatial light on the devices so is that something you guys got working well or and if so what were your experiences with that? Yeah, we do have it working pretty well now and I think did we use Ogre2Ogre Mike because there's sort of basis for that. 
Yeah, sorry, so we use ogr2ogr as the basic thing, but then the way it tends to work is that we package up a database once, which might be the whole thing or might be a subset; we typically have several hundred to several thousand offline users. We have it working pretty well, but it has been quite a lot of work to get there. It's one of these things where there are just lots of little details: what happens if you have a data model change, and a whole bunch of things when you're doing large scale deployments that get to be a pain. Like I said, it's not rocket science, but it's just a lot of little details, so it has been a fairly significant effort to get it going. So when you say you use ogr2ogr, do you create a SpatiaLite database of changes that you send to the devices, or are you sending GeoJSON, or...? Yeah, so we do the initial load and then we do the nightly changes, which we package up; again, we do one package of nightly changes and propagate that to all the devices. I think those changes are GeoJSON, and then we basically zip them up, download them, and run a little script on the client that unpacks them. Okay, thank you.
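As a hedged sketch of the nightly change-package pattern just described (table and column names are invented, SQLite stands in for the PostGIS master, and deletions are omitted for brevity), one might generate and apply packages roughly like this:

```python
# Hedged sketch: export features edited since the last sync as GeoJSON, zip
# them, and apply the package to a client-side SQLite/SpatiaLite replica.
# The "assets" table and its columns are assumptions, not the real schema.
import json
import sqlite3
import zipfile

def build_package(master_db, since, out_zip):
    con = sqlite3.connect(master_db)            # stand-in for the PostGIS master
    rows = con.execute(
        "SELECT id, kind, geom_json, props_json FROM assets "
        "WHERE last_modified > ?", (since,)).fetchall()
    features = [{"type": "Feature", "id": i,
                 "geometry": json.loads(g),
                 "properties": dict(json.loads(p), kind=k)}
                for i, k, g, p in rows]
    fc = {"type": "FeatureCollection", "features": features}
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("changes.geojson", json.dumps(fc))

def apply_package(client_db, pkg_zip):
    # Assumes the assets table already exists from the initial bulk load.
    with zipfile.ZipFile(pkg_zip) as z:
        fc = json.loads(z.read("changes.geojson"))
    con = sqlite3.connect(client_db)
    for f in fc["features"]:
        con.execute(
            "INSERT OR REPLACE INTO assets (id, kind, geom_json, props_json) "
            "VALUES (?, ?, ?, ?)",
            (f["id"], f["properties"]["kind"],
             json.dumps(f["geometry"]), json.dumps(f["properties"])))
    con.commit()
    con.close()
```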
We're curious, with the things you've been doing with PhoneGap, what are the advantages of PhoneGap versus HTML5? Really the biggest advantage of PhoneGap is having this larger storage capability: there we can have a native SQLite database and we can store multiple gigabytes. That's the biggest single thing. You also have better access to native things on the phone, which we don't need for most apps, but we do have one app where we interface to Bluetooth gas detectors, and for that one we're going to need the native app. The advantage of HTML5 is that it's on the whole a lot easier to deploy. A scenario with this damage assessment is that you get what they call mutual aid crews that come in from a different utility, and if you can just send them a link and in like two minutes they have a working app they can unplug and go, that's huge. And obviously shipping updates and all that kind of thing is a lot simpler. But the biggest single limitation with HTML5 is the data volume, and then there's still a bit of flakiness. It's definitely promising, and I think especially as you get better and better wireless coverage, more and more cases will be covered by HTML5, so I expect over the next couple of years we'll do a lot more with HTML5, but we still kind of need the base capability of syncing a large dataset offline. Just a last question. Peter, I was wondering if you discovered a way to take a cache manifest and a package of files, sort of package them together and deploy them without having to put them up on a web server first. Is there a way that you can put those on the device without having them on the web server first? Well, kind of. We looked a little bit at this idea of maps on a stick, a USB stick or an SD card, and in a lot of cases with the stuff we were doing with HTML5 you in effect end up with an app that is running offline where there is no server, so to some degree, with fairly minor changes, you could often package that same set of HTML files, just put them on an SD card or whatever, and run from that locally, which has a few slightly different characteristics but broadly speaking will work. So, like this wireless device that we had: we used more or less the same set of files, with all the tiles and everything cached, and you just had a simple app that was going against those files stored on the device. But I don't know if that makes sense. File URLs then, instead of HTTP? That varied slightly. I think in some cases file URLs work; in other cases the security on the browser might not let you do some things with that. With this external device with the Wi-Fi, the AirStash device, I think we could actually access them as HTTP files, so that seemed to work; we were still serving them through HTTP, though we couldn't run any extra software on there, we could just serve the files. Okay. All right, thank you. Yeah, thanks, and if anyone has other questions I'll hang around for a bit. I also have the iPad offline app here if you want to have a quick play with it and see how the PhoneGap thing works. So yeah, thanks.
|
This presentation will discuss enterprise web mapping and mobile applications that we've been developing for large utilities and communications companies, based on a number of open source geospatial components, including PostGIS, MapFish, GeoServer and Leaflet. It will discuss development of offline mobile applications using both PhoneGap to compile to native applications on Android, iOS and Windows, using a SpatiaLite database, and also use of HTML5 offline storage. We will discuss ideas on how to create extremely easy to use but still powerful applications, using approaches inspired by consumer web mapping sites rather than traditional GIS. The presentation will not be deeply technical but will include material of interest to developers as well as end users and managers.
|
10.5446/31704 (DOI)
|
Thank you. Our new name is now Mobile Map Technology, because of a trademark issue and because we are now in LocationTech we have this new name. My name is Manuel de la Calle. What is Glob3 Mobile, now Mobile Map Technology? We are using exactly the opposite approach to the talk before. First of all, we have an open source API; this is so obvious here at FOSS4G. We build native map applications that run on any device. We have developed an architecture to have a native API on all the platforms. We have a cross-platform SDK with which you can build native map applications very fast, and it is also 3D. Our repository is here. Our license: we are using a very open license, a BSD 2-clause license. Now we are going to change to the EPL, which is much the same kind of license; we are going to change because we are going to the Eclipse Foundation. This is our source code, our repository. We develop directly here. We have many branches where we are improving things; our main branch is called purgatory. If anyone wants to try it, the repository is there. When we started with this product two years ago, we were thinking about different things regarding mobile devices. First of all was the fragmentation. I suppose everybody who is developing mobile applications knows about this problem. This problem gets worse every day, because the market is filling with Android devices with different screens, different processors, different memory and also different operating systems; every day we have a new fork of Android. We have the same problem with browsers: now every browser has a different engine, so we have other problems with that. The only platform that seems stable is iOS, but it's absolutely closed. For development it's better, though, because we only have two or three screen sizes. So one of the biggest problems for developers is fragmentation, and we faced this problem. Another problem is performance. In the talk before, they talked about the problems with smooth movement, like panning or zooming. Maps are a high performance application; we need all the resources on the mobile device. Browsers have been working better and better lately, but for most applications it's not enough. Every day we are trying to do more complex applications on mobile, so performance is absolutely important. We are absolutely performance obsessed; we work on performance all day. Our globe works very well on all platforms. Our worst case is Android, because we have more problems there, but the performance is still good. Another problem is usability. With usability on mobile, everybody knows that the resolution of my finger is not the same as my mouse; on mobile you use the finger. We can't just make smaller versions of desktop applications and expect things to work the same way on mobile. From the beginning we have designed our API thinking about usability. Ease of coding: we want to have good applications in a very short period of time. At the end of the day, we have the very same API on all platforms. Android and WebGL share about 75% of the code when you are developing an application. If you are an Android or iOS developer, having a map with a globe is two lines of code; after that, you need to add your functionality, but it's normal Android or iOS functionality. The architecture is complex, but for the developer it doesn't matter, because at the end of the day he has a native API. This is our architecture.
It's the way that we have the same API on all platforms, because we don't want to code three different APIs. Our solution was to develop a core in C++ that manages all the most important things in our tool. We work in the core with OpenGL and all the complex stuff. This core is translated with a very simple translator to Java; at that moment we have a Java API. Then, after that, we translate another time with Google Web Toolkit and we have an HTML5 application. As Google Web Toolkit means developing in Java, the Android and HTML5 applications share a lot of source code. It's a little complex, but it works really well. It's easier to maintain than bindings, and we have better performance. Translated code is very interesting, because when we have a problem in HTML5 or Android and we fix it in the C++ part, the whole solution improves: we fix a problem on Android and improve the iOS solution at the same time. But for the developer, at the end it's the same: he always has a native API. Now, as we have a C++ core, we can add new platforms; we are adding a Windows platform now. Okay, these are different screenshots. Now I'm going to show you the globe working. This is the first one on iOS; this is Android with 3D buildings and a 3D model moving. This is on Google Glass, working on Google Glass. This is on the web. You can work with different views: we have the globe view, we have a view that we call a scenario, which is simply a part of the terrain, a bounding box, in our case called a sector, and also a flat world. We can add any kind of data. We haven't developed parsers; we have only developed one parser, for JSON, and when we need to, we change the format. We have some problems with 3D model formats because there are no standards at the moment, but we are waiting for that. We can put in terrain models (DEMs), raster information, point clouds, any kind of data. One of the capabilities is that we can work offline or online. Mobile solutions are always a trade-off between server development and client development. We faced that: part of the things we do on the server and part of the things we do on the client. We have also developed a server for working in real time; we have implemented this server and it works fine, so we can push real-time information to our applications. We manage the cache. You can work absolutely offline or absolutely online. We use SQLite, which can be shared between the platforms, and now we are adding support for MBTiles. You can manage the cache completely: you can decide whether you want to cache, and for how long. For example, if you are making a weather application, you don't want to cache; you just have to say in the definition of the layer that you always want to refresh, and it works fine. We have camera models and animation. You can do 2D or 3D applications: in a 2D application you don't need to move the camera, but in a 3D application you can move the camera wherever you want; concepts like scale and things like that are absolutely different. Moreover, we have other utility classes in the API that you need to develop applications. For example, we had a problem with tasks, because the HTML version is an asynchronous environment while iOS and Android are synchronous. Most of the time you need to run tasks in the background, and to do that we have classes so that the developer doesn't have to worry about it. We have also developed classes for that because we need to put things over the camera.
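The per-layer cache policy described above (a weather layer always refreshes, a base map can live in the cache for a long time) can be sketched generically. This is not the actual G3M cache classes, just an illustrative sketch with SQLite and an assumed tile schema:

```python
# Generic sketch of per-layer cache expiry: each layer declares a time-to-live,
# a ttl of 0 means always re-fetch, and base map tiles can live for days.
import sqlite3
import time

class TileCache:
    def __init__(self, path):
        self.con = sqlite3.connect(path)
        self.con.execute("CREATE TABLE IF NOT EXISTS tiles ("
                         "layer TEXT, z INT, x INT, y INT, "
                         "fetched REAL, data BLOB, "
                         "PRIMARY KEY (layer, z, x, y))")

    def get(self, layer, ttl_seconds, z, x, y, fetch):
        row = self.con.execute(
            "SELECT fetched, data FROM tiles WHERE layer=? AND z=? AND x=? AND y=?",
            (layer, z, x, y)).fetchone()
        if row and ttl_seconds > 0 and time.time() - row[0] < ttl_seconds:
            return row[1]                       # fresh enough, serve from cache
        data = fetch(z, x, y)                   # online fetch via callback
        self.con.execute("INSERT OR REPLACE INTO tiles VALUES (?,?,?,?,?,?)",
                         (layer, z, x, y, time.time(), data))
        self.con.commit()
        return data

# cache.get("osm",     7 * 86400, 12, 703, 1635, fetch_osm)      # cache a week
# cache.get("weather", 0,         12, 703, 1635, fetch_weather)  # always refresh
```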
We have several tools for that transformation, because we don't want to write the parsers ourselves. For that we normally use another library, a very good one, because that kind of transformation is one of the things where there are better libraries on the market. The latest new thing that we have done is, for example, a vectorial tiling library. It is released like all our tools; we have a server part and a client part. We import the data we want into PostGIS and prepare the pyramid. We have a Java process that builds the pyramid, and the pyramid is a GeoJSON pyramid that can be consumed in OpenLayers or in our library. Of course, in our library it's very, very easy to consume: you only have to indicate where the server is, and you can put the pyramid wherever you want. It's like a raster tile pyramid, but in GeoJSON, and it works, at least in our client, very nicely and very fast. Now, one of the last things that we are doing is the streaming of point clouds. This is absolutely beta; our first version was two days ago, but I want to show it. The original point cloud is three billion points; we have downsampled the cloud, and here we have 160 million points. This is the HTML5, WebGL version; I have the Android version working also. The important thing here is that we are showing only meaningful points: all the points that we are showing preserve the shape. Here we are showing, I don't know, 40 points, and these points are not arbitrary; they actually keep the shape that we have. As we come nearer, more points come in. Of course, it's 3D; it's difficult with the mouse here, but this is the idea, and it's working. Let's see. I think it works. Hello. OK. This library is still in development, but it's already working on the three platforms. We have to improve the performance, but we are sure that we have margin to improve. We have a process that imports the entire point cloud into a Berkeley DB. We import the whole point cloud with parameters for how we want to order the points that we want to show, and we have a server part for this. Finally, we have a server that streams the point cloud, and depending on where we are in the point cloud it decides which points it has to give us. All the points are ordered with a quadtree, and that works really, really nicely. We have native support in the whole tool, so it's really easy to add a point cloud, and as I said before, the streamed points preserve the shape, basically.
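The quadtree ordering that lets the server stream "meaningful" points first can be illustrated with a simplified 2-D sketch: one representative point per quadtree cell, emitted breadth-first, so any prefix of the ordered list already preserves the overall shape of the cloud. This is a conceptual toy, not the actual G3M importer or server.

```python
# Simplified 2-D sketch of quadtree ordering for point streaming: stream
# ordered[:budget] for any point budget and the coarse shape is preserved.
from collections import deque

def quadtree_order(points, bbox, max_depth=12):
    minx, miny, maxx, maxy = bbox
    ordered, queue = [], deque([(points, minx, miny, maxx, maxy, 0)])
    while queue:
        pts, x0, y0, x1, y1, depth = queue.popleft()
        if not pts:
            continue
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        # Representative = the point closest to the cell centre
        rep = min(pts, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        ordered.append(rep)
        if depth >= max_depth or len(pts) == 1:
            ordered.extend(p for p in pts if p is not rep)
            continue
        quads = {(False, False): [], (True, False): [],
                 (False, True): [], (True, True): []}
        for p in pts:
            if p is rep:
                continue
            quads[(p[0] >= cx, p[1] >= cy)].append(p)
        queue.append((quads[(False, False)], x0, y0, cx, cy, depth + 1))
        queue.append((quads[(True, False)], cx, y0, x1, cy, depth + 1))
        queue.append((quads[(False, True)], x0, cy, cx, y1, depth + 1))
        queue.append((quads[(True, True)], cx, cy, x1, y1, depth + 1))
    return ordered
```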
Nothing special there — you can put eleven globes, or whatever you want. This is a scenario with a terrain model; this is near my hometown. This is working on Android, iOS works the same way, and the web also. We have vector data — this is GeoJSON. This is a normal vector layer: you can click on the markers and you have all the normal things you expect from a vector library. We have a very powerful symbology library, and it's very interesting: one of the things about a 3D library is that you can symbolize in these ways. This is very easy to do, and this kind of symbology with shapes performs very well. This is a point cloud, but this point cloud is not like the other one: this point cloud is offline, it's on the device. With the streaming we can show the same density of points, but for much larger point clouds. This is a 3D model moving with the tasks. This application is a demo that we made for an airline. We get the data from the plane instruments — the plane is giving us all the data: the position of the plane, the pitch, everything the plane is doing — and we move the plane. This is a point of interest where you can go; there are a lot of views. The idea is maps inside the plane, for a future where the company will have an internet connection on board as a server. In the States most of the airlines have internet connections; in Europe it's not yet a normal thing. The idea here was that all the raster tiles were on the plane with a small server, all the movement is produced from the plane instruments, and you can go wherever you want with the plane. I think that's all. Any questions, please? For the point clouds, are you reading native point cloud formats, and are you reading all the dimensions, or just XYZ and treating them that way? We can do it, but at this moment we get a LAS point cloud — I don't remember the name of the tool — and we just convert it to XYZ to import. It's not a big deal to read different formats; the problem is the ordering and all that stuff, but we think that's not very hard to do. For this, like other things about formats, we don't develop anything ourselves: we use other good readers, and there are a lot of them on the FOSS market. Any other questions? Thank you. I'm around — if anyone wants to ask me something, I'm everywhere.
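Since the question of ordering came up, here is a minimal sketch (TypeScript, invented names; not the Glob3 Mobile server code) of the quadtree ordering described in the talk: points are emitted coarse levels first, so any prefix of the stream is a spatially even sample that already preserves the overall shape of the cloud.

```typescript
// Sketch of level-of-detail ordering for point-cloud streaming (invented, illustrative).

interface Point { x: number; y: number; z: number; }

interface Node { minX: number; minY: number; maxX: number; maxY: number; points: Point[]; }

function orderByQuadtree(points: Point[]): Point[] {
  const ordered: Point[] = [];
  let queue: Node[] = [boundsOf(points)];
  // Breadth-first: every pass over the queue emits one representative per node,
  // so each "level" of the stream refines the previous one everywhere at once.
  while (queue.length > 0) {
    const next: Node[] = [];
    for (const node of queue) {
      if (node.points.length === 0) continue;
      // Emit one representative point for this node (here: simply the first one).
      ordered.push(node.points[0]);
      const rest = node.points.slice(1);
      if (rest.length === 0) continue;
      // Split the remaining points into the four child quadrants.
      const midX = (node.minX + node.maxX) / 2;
      const midY = (node.minY + node.maxY) / 2;
      const children: Node[] = [
        { minX: node.minX, minY: node.minY, maxX: midX, maxY: midY, points: [] },
        { minX: midX, minY: node.minY, maxX: node.maxX, maxY: midY, points: [] },
        { minX: node.minX, minY: midY, maxX: midX, maxY: node.maxY, points: [] },
        { minX: midX, minY: midY, maxX: node.maxX, maxY: node.maxY, points: [] },
      ];
      for (const p of rest) {
        const i = (p.x >= midX ? 1 : 0) + (p.y >= midY ? 2 : 0);
        children[i].points.push(p);
      }
      next.push(...children.filter(c => c.points.length > 0));
    }
    queue = next;
  }
  return ordered;
}

function boundsOf(points: Point[]): Node {
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const p of points) {
    minX = Math.min(minX, p.x); minY = Math.min(minY, p.y);
    maxX = Math.max(maxX, p.x); maxY = Math.max(maxY, p.y);
  }
  return { minX, minY, maxX, maxY, points };
}

// A streaming server can store `ordered` once and serve points[0..n), where n grows
// as the client zooms in, instead of re-sampling the cloud on every request.
```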
|
Glob3 Mobile is an API for the development of native map applications on mobile devices.The main capability of this library is the Multiplatform approach, it have the very same API in all environments thanks to coding translation.Developing with Glob3 Mobile you can save time and resources when you face a mobile development having all advantages of native development (Performance, UI, Access to disk, sensors, etc) and the simplicity of an API thought for GIS developers.During 2013-2014 G3M has been growing in capabilities and is now a solution to face the development of any map application on any device.In this presentation We will explain the architecture and the main capabilities of this library and we will show some examples and demos and use cases with the API working.Glob3 Mobile has been developed thinking in the usability and the UI of mobile devices.Currently Glob3 Mobile is working in the next platforms:* iOS* Android* Google Glass* html5-webgland it is planned to add others like Windows 8 or Java Desktop.Currently also g3m has been used as offline AR engine for wearable devices.The capabilities list is huge but the main are:* Raster data* Vectorial data* Point Clouds* 3D models (Buildings, cities, vehicles, ...)* 4D Data* Real Time* Simbology* Offline - Online -> Cache Handling* 3D- 2D - 2,5D views* Scenarios* Animations* Cameras* TasksDuring 2014 Glob3 Mobile will become part of Location Tech (Eclipse Foundation) and will change the name and license:Glob3 Mobile --> Mobile Map toolsBSD -> EPLA different use cases of Mobile Map Tools.Vazz: http://www.vazz.tv/start Galileo: http://galileo.glob3mobile.com http://galileo.mobilemaptools.comAero Glasses: http://glass.aero/
|
10.5446/31705 (DOI)
|
Okay, I think we'll get started. I'm Frank Warmerdam. I've been a geospatial developer for 20 years — PCI in the 1990s, then working as an independent consultant doing GDAL, which is the thing I'm probably best known for, through most of the 2000s, and then a couple of years recently at Google. Now I've been at Planet Labs for the last 13 months. I was an OSGeo director for a long time, although I'm just finishing off my term now, so not anymore. At Planet Labs I work on the data pipeline team, so my particular responsibility is, once we've got orthorectified images, to turn them into mosaics. Planet Labs is a small-satellite company: we build and operate Earth-observing satellites in a CubeSat form factor — 10 centimeters by 10 centimeters by 30 centimeters. What you can see in our booth is not an actual satellite, but it shows you the form factor. I don't know why we don't bring real satellites anymore; that would be cool. Our goal is to build satellites cheaply, with consumer-oriented — or at least not space-hardened — electronics, so that we can build them quite inexpensively, and also so we can use small electronics and build a small package instead of using technology from the 1980s or 90s. We're using current-ish technology so we can build small and cheap and launch a lot of them. So our shtick isn't really to build the best satellites in the world; it's to build the most cost-effective satellites and to fill certain roles. When we talk about the company we often refer to ourselves as being in "new space": the idea of commercializing space and doing things in a new way, instead of the old traditional way with ten-year-long development programs and billions of dollars. You can see things like SpaceX, UrtheCast and a bunch of other folks that I would call new space as well — basically trying to do space technology differently. I come at this from a traditional geospatial background, not really knowing much about space tech, but one of the things that's really exciting for me working at Planet Labs is that they're changing how space happens. I think a lot of us have been space aficionados for a long time, but everything always seemed to go so slowly — so can we make it go faster? But I'm not going to talk about that in great detail today. We have launched approximately 70 satellites so far, of which a smaller proportion are actually operational at this time. The fact that we can have some satellites that have already deorbited, or that don't necessarily work, speaks to the fact that we can do it small and cheap, so we can afford to take risks and push fast. Our ultimate goal is to be imaging the world every day — actually collecting five meter imagery of the whole world, fresh every day. Of course, a lot of that would be under cloud cover, so it wouldn't mean a truly brand-new cloud-free mosaic each day, but our hope is eventually to be building a mosaic every day, even if the parts that were under cloud are imagery from a few days before, from whenever the last clear view was. Operationally, we're quite a ways away from that.
So, it would take hundreds of satellites, it will take hundreds of satellites presumably to get to that goal and we are doing things every day to improve our operational effectiveness. So, as a company, our products are basically standalone images. These composites that I'm going to be talking about, composites mosaics with a sort of frequent timescale and ultimately derived information from that. So, things like change detection and other kind of information products. So, ultimately, it's too much data for people to be consuming directly. The idea is we would be boiling it down into information products that then they would drill in and look at particular things as appropriate. But I'm really going to stay focused on mosaics today. That's enough of a topic probably for 20 minutes. So, unfortunately, that's about all you get to learn about Planet Labs in general. Oh, no, I got one slide. Okay, here's a picture of some of our jobs. So, the point, one of the points of this slide is that we've actually gone through a whole bunch of different builds and designs over time and basically every time, every few months when we're launching new satellites, it's a slightly improved design from the last one as opposed to design cycles in sort of traditional aerospace where it might be, you know, every three or four years they're launching a new satellite with improved designs. Okay, so my challenge, my part of the work I do at Planet Labs is try and build the composite, try and build this global mosaic. So, we obviously would like it to be seamless and cloudless and radiometrically consistent. So, we're building this out of basically millions of scenes. So, the idea is that we would have many, many images of the same patch of ground and we can sort of pick through those either on a pixel by pixel basis or a patch by patch basis to try and get the best pieces out of each one and we're perhaps the most current pieces or whatever is the appropriate criteria to build this global mosaic. So, the scenes might be spread over weeks or months. One of the things we want to be able to do is kind of make dynamic decisions about how to build this mosaic. So, for instance, I want to be able to fire off a thing which would be a three-month mosaic, taking the best images out of three months of the world. But I also want to be able to build mosaic that might only be imagery from the last week or two or something like that, even if that means having to take lower quality imagery but basically so it's showing something more current. In some cases, we might be building products for someone in the ag sector where they really want to see just one piece of the growing season. So, we could say build them a mosaic of all of Brazil which is just imagery from these two or three weeks and then that tells them something very specifically, a picture of agriculture in a whole country in a fairly narrow time zone. So, these are sort of in traditional Earth observing you could do this with MODIS or possibly with Landsat over longer stretches but we're, I like to think that we're the first people who would be able to do it at this kind of frequency at around the five meter range. So, okay. So, it's imagery also from different times of day potentially. So, we actually launch our satellites into a variety of orbits. We're kind of launchers of opportunity in the sense that when we can get our satellites on as a side cargo on launches, we'll take a wide variety of orbits whether they're kind of ideal for us. 
Well, that does mean that the imagery going into our mosaics has a variety of different conditions. So, from different elevations means different resolutions. Also, different, some of our satellites are in sun sink orbit which means they're always collecting at roughly the same time of day for any given location on the Earth. Other ones there are things like the space station orbit where you're actually collecting at all times of day and from means you have funny sun angle issues and different lighting and stuff like that. So, one of the challenges is really to correct for that. And obviously, lots of different weather conditions. Okay. So, this is the part where I make fun of everyone else's mosaic. So, we've all wandered around in Google Earth or, you know, and all of these other imaging mosaics and you get a little bit out of the city and suddenly start finding all of the quirky things. One of the things that actually motivated me when I was considering the change to Planet Labs is I was trying to help a little bit with the open street map mapping in the Philippines. And there was actually some not bad imagery in Bing, for instance, right along the coastline. Somebody had obviously acquired it. It was interesting from a tourist point of view or something. But you didn't have to go far inland before you're in the Landsat. So, these global mosaics that say at the consumer level are actually important for all kinds of processes for people to understand about their places. So, not just for science, not just for industry, but for people to understand things and also to respond to disasters and so on. So, for instance, that was an example where I found it very difficult to collect roadmap data in the Landsat areas. It's just you could not resolve things in any sort of useful fashion. So, it's one of the things I want. Okay. So, I'm going to make fun a little bit of other people's mosaics. So, the main backdrop we actually use is our base map when we're doing all of our testing work at Planet Labs is the current map box satellite. So, and it's actually a great product in a lot of ways and they're really a guiding light to me, the map box satellite team in a lot of ways as far as important techniques to apply but I'm going to just show some of their old stuff. So, of course, they can clearly say, oh, this is just the old thing. They're doing new stuff. So, that's good. But so, one of the points I want to make here is this is an area in Brazil. So, it's not considered of high priority interest. It's not in the United States. So, you still actually end up with imagery that's cloudy. This is actually a part of Brazil that I've been using and doing a lot of practice on because, in fact, it's almost always cloudy. It's almost always these popcorn clouds going over. So, if you take a strategy which is try and just find a cloud-free day, try and find a whole cloud for your scene, it's actually a very difficult place to do it. So, during a large part of the year in particular, I guess maybe it's the growing season, I'm a bit vague on the details. In fact, there's almost always popcorn clouds going over top. So, this is sort of typical of that. Obviously, there's been some effort that went into at some point sort of collecting the best cloud-free image that they had at the time to build this mosaic, but that is still not that great. So, that's really, I think this is actually from Landsat, this layer in theirs. This comes from Bing. So, this is actually much higher resolution imagery. It's actually not. 
It's not too great on this place. But clearly, you get the point is, once again, cloudy. So, this is actually a non-trivial-sized town, but it's an area part of the world that people are not that interested in, doesn't justify aggressive acquisition costs and things like that. The other thing that I really want to see here is cloud shadows. So, I'm going to talk about this a little more. But one of the nightmares is cloud shadows. So, in this case, they're not real. They have the clouds too. So, they don't really worry that much about the cloud shadows. But one of the real challenges in putting together mosaics isn't just getting rid of the clouds. It's getting rid of the cloud shadows. So, the clouds, you can basically try not to get light-colored pixels, prefer darker pixels. But if you take that to the extreme, we're going to see some of the later examples as you basically get a collection of cloud shadows. So, this is actually supposed to be an example of a success. So, previously, I was working at Google and for the last year, I spent time on the Earth Engine team. And one of the things that the Earth Engine team does is global compositing from Landsat 7 and Landsat 5 data. And really, a lot of the techniques that I've been trying to apply are basically things I sort of remember from when I worked there. Another fellow did it. So, hey, it's much smarter than me. Hopefully, I can replicate some of these techniques. But I actually bring this up as a demonstration of a success. Borneo is an extremely cloudy country. So, basically, in this case, they've taken Landsat 7 data from about 12 years, I think, to build this global, what they call the pretty Earth image. So, it's basically at Landsat scale. So, it's 30 meter resolution. But they've taken basically 10 or 12 years of imagery. And for every pixel, they've analyzed that and they've tried to pick the representative cloud-free pixel along with some, they apply some other rules around trying to, for a green time of year versus not a green time of year. And this is actually a success. So, this is an area of the world where if you were to look around, you just would not find cloud-free images. And in fact, you often, for typical Landsat acquisitions here, almost all the images cloudy or at least there's always parts of it that seem to be cloudy. So, part of the lesson here is if you actually take a deep enough stack, if you've got enough history, enough imagery, everywhere is uncovered, at least now and then. On the other hand, if you zoom in in Google, Earth, the Google Maps in Borneo and you get down to their higher resolution data, you soon discover exactly the same old problem is that it's using more traditional mosaic techniques and wherever you go, you're still getting cloud. They've actually got a bunch of different image products here that they've kind of patched together. You can see there's this boundary here where, this shows up early here. I'm sorry, there's not the greatest greens. Anyways, you can kind of make out in a slightly clearer view that this is kind of patched together from different images and you can see some of the clouds kind of get cut off in the middle. And in fact, you see a peculiar effect here. Clearly, they've got one image product and then another and you've got a visible seam line. This is actually, so I lived in rural Canada until a few years ago. So this is actually the road that I lived on, Foy Mount Road. So it was always very disappointing for me to go into Google Earth, Google Maps. 
And a few hundred feet from my house, there would be this nice air photo data. They must have got from the county or the province or somebody. And then you go through a little strip of Landsat data and in fact, my house in town, after I moved into town, they only had a Landsat data for the town, which is quite tragic. So what you end up with is a mishmash of, you know, products coming from all sorts of different stars. So they're basically, they're trying to use the best data they have where they have it. So they got some really beautiful air photo data. What you're, you've got all these arbitrary transitions and holes getting filled in with whatever resolution they have. So I completely understand how this happens and it's like, given what they're trying to do, this is in some ways the best they can do. One of the things I'm hoping is given a rich enough set of relatively consistent input data from our satellites, we can, we don't end up having to have this kind of patchy effect, especially we don't have to have this, you know, varying resolutions problem, although at the cost of, we don't have super high resolution. Okay, so visible seams and different sources, quite visible. This is another Landsat derived product. So this is from the USGS. Partly, I wanted to show here that even with Landsat data, you can end up with these situations of visible scene boundaries. So basically, the scenes are collected, these are sort of low-pack boundaries here. So you're getting very different effects just on the sort of acquisition boundaries there. It's also got a terrible striping effect, but that's something to do with the sensor and they didn't obviously very aggressively fill in those holes. Or I guess what's happened is that they've got one scene with cloudy holes and then they fill in the gaps from a scene that's underneath so you get a really, you know, odd striation effect, certainly something I would want to avoid. Okay, so now having made fun of them, I'm supposedly supposed to come up with a plan that's going to produce a better composite. I wish when I signed up for this presentation, I thought I would be reporting on that. Instead, I will be reporting on my attempts to do so and with, you know, limited degrees of success and lots of things yet to work on. But hopefully, it will still be mildly interesting and give a sense of sort of an aspect of image processing that's sort of still developing. Okay, so, and in fact, I'll be talking mostly about working with Landsat data. So Plant Labs has only like a little over 100,000 scenes or so, so we do not have a global covering yet at our current operational cadence. And particularly, we do not have this deep stack idea. So my idea is I'd be working with stacks of like 5, 10, 15, 20 images deep. So we certainly do not have that catalog yet. So what I've been trying to do is experiment with Landsat data basically to develop techniques. So particularly with like a year or a year and a quarter's worth of Landsat data. So most of my examples I'll do is go through that and I'll show a couple of pictures of our data at the end, but that's not, we don't really have the data depth to do the full compositing approach yet. Okay, so the idea is we have a deep stack of imagery. We're going to go through pixel by pixel. So we carefully co-register them and co-registering them is very important and certainly one of our processing chain problems. 
And then, so we go take one pixel and we basically look at all the values that we have for that pixel from all the different scenes. And then we kind of sort and process through them in various ways to try and pick out what we think is the most representative pixel. And then optimally, you go to the next pixel beside it and you do that complete analysis again, independent of all the others. So it's a highly separable thing. For every pixel, you're actually trying to take the best thing that you could find and you could actually have, you know, a mishmash of clouds even with relatively small gaps in theory and you would still be able to get some value out of the data in the gaps. That's the sort of idea. I'm just going to keep it on my time. Okay. So there's a bunch of different sort of techniques and measures that we can take. So in theory, what we want to do is pick the highest quality. So there's various ways you could measure quality. If you are really just worried about avoiding clouds, you can just pick the darkest pixel and basically that's going to avoid clouds. It also avoids haze. So one of the big problems we also have is different atmospheric conditions. So there can be high-series clouds that are very thin and don't show up as like this puffy white clear mass. It's just a general lightning in the scene without really clear boundaries or you can have ground fog or other sort of ground level atmospheric effects and those are generally manifest as a lighter scene relative to other candidates from the same area. So the naive approach which is mostly what I'm going to talk about is taking the darkest pixel and then we obviously are going to come back to that whole cloud shadow issue a bit. Other categories, so for the Landsat Pretty Earth that you did at Google, there's actually a measure that was sort of a mixture of darkest and greenest. There, when you're collecting across a whole year or many years, you want to try and pick, ideally you want to pick from an optimal season. So greenest is generally the most interesting season but also you don't want to mix a green season and dry season. Other things that we're very interested in doing is there's a bunch of other sort of image quality issues, some of which are very across our sensor and some of which are very seen by scene. So one thing is we have a very severe vignetting effect, so in the very corners of our scene the image quality is very low. So I want to actually have a mask which says I prefer to use the pixels from the center of one scene rather than the pixels from the corner of another scene because we know that the center is better in the corner. You know, when if you have a little bit of fuzz on the lens or other image anomalies and things like that you can apply the similar techniques. So if you've got damaged pixels and et cetera, you can, if you can have per sensor quality characteristics, you can use that to de-emphasize those things. You could mask them out and completely throw them away but sometimes it's the best imagery you've got for the spot and it's better than nothing. But you definitely would prefer to take something that didn't have those problems. And we obviously would like to be able to mask out clouds that we're very confident of. So there's different techniques. Darkest will do it kind of automatically but if you're going for other techniques then maybe you want to have explicit cloud masks and get rid of the clouds that way. So there's also a bunch of per scene characteristics. 
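As a rough illustration of the per-pixel selection just described — a hedged TypeScript sketch with simplified scoring, not Planet Labs' actual compositor — each output pixel gathers its candidate samples from all co-registered scenes, discards known-cloud samples, prefers good-quality ones (for example, away from vignetted scene corners), and then falls back to the darkest remaining value:

```typescript
// Illustrative per-pixel selection for compositing; not the real Planet Labs pipeline.

interface Candidate {
  value: number;       // brightness of this pixel in one scene (e.g. average of R, G, B)
  isCloud: boolean;    // true if an explicit cloud mask flagged this sample
  quality: number;     // 0..1 per-pixel quality, e.g. 1 at scene center, lower in vignetted corners
}

// Pick one value for a single output pixel from its stack of candidates.
function selectDarkest(candidates: Candidate[], minQuality = 0.3): number | null {
  // 1. Drop samples that an explicit mask already identified as cloud.
  let usable = candidates.filter(c => !c.isCloud);
  // 2. Prefer good-quality samples, but fall back to low-quality ones rather than leave a hole.
  const good = usable.filter(c => c.quality >= minQuality);
  if (good.length > 0) usable = good;
  if (usable.length === 0) return null;   // nothing at all for this pixel
  // 3. Darkest-pixel rule: clouds and haze are bright, so the darkest candidate avoids them
  //    (at the risk of picking cloud shadows, which is discussed next).
  return usable.reduce((best, c) => (c.value < best.value ? c : best)).value;
}
```

Repeating this independently for every output pixel is exactly the "highly separable" per-pixel approach the talk describes.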
So we would much prefer to have a scenes at well at time of day relative to close to dawn or dusk. They're going to have much better signal to noise ratio and so even if things are near dawn and dusk we can actually correct for the sun angle in terms of just an overall multiplier for the scene but the signal to noise is actually quite a bit worse at those times of day. So these are sort of scene level characteristics. Sometimes we also have different sensors which are more or less in focus or we've got other things at that level that we want to keep track of and we'd prefer to take the best quality scenes where we have them but if we only have the lower quality scenes we would rather have that in the mosaic than nothing at all rather than a gap. So some general criteria. Pre-processing. So we have of course a whole processing chain. The most important part of it is actually rectification. So I talk about taking these pixels and you want to take, we're not doing the processing for our data on 4.7 meter by 4.7 meter rectangles. You basically, if you have absolute or relative spatial error between your scenes of, you know, five meters then you're basically not looking at the same spot on the earth and all those pixels. So it's important to have a good registration. So that's a big part of our pre-processing chain which I won't go into here but was, I had actually, hopefully we'd have a separate talk on that but didn't get accepted. Okay. So in the case of Landsat we basically take the UTM image and we just reproject it all to our output global grid which is Mercator. So the Landsat, I did all the processing at Landsat at Mercator level 13 and that's basically 19 meter pixels. And luckily the Landsat spatial accuracy is generally pretty good although I get a few scenes that some, something went crazy on and nobody noticed and they really cause strange effects. I don't unfortunately have anything to show that but when we do our own data I do it at two levels deeper so at about that 4.7 meter. So I have, there's a lot of stuff about the processing chain that goes into that where I try to do it directly from raws so that I can do just one resampling step to get right to the, basically the serving grid. So the reason I'm picking these Mercator things is we want to serve Mercator tiles ultimately so it's best to go right from my raw scene in one step to that to improve, to keep minimum damage to damages. So we also go through a variety of calibration steps which are working moderately well but not perfectly yet. Okay so I have to set up, I'm going to talk more mostly about Landsat data so I'm going to give a few of the input areas. So this is that, that a small, a small patch of that area in Brazil that I showed before. So I'm trying to give this sense that it is in fact most of the imagery is quite cloudy. So I've got three typical scenes. So you can see they've each got, you know, popcorn clouds. So those are three typical scenes. There are some of the scenes for this area that are basically all clouds and there's some that are more like, you know, only 10 or 15 or 20 percent clouds. Okay so that's some typical imagery that's going into it. And in fact you can see quite distinct cloud shadows that extend quite a ways away from the clouds. So the technique is pixel stacking. So you want to take all the, so these are basically graphs of one pixel and in this case I'm graphing against time and I've got red, green and blue. 
So what I actually hoped to see in these was something that was more about seasonality. I'll come back to that I think in a few minutes. But anyways, it gives you the sense that here we've basically got every one of these, every one of these points is one of the Landsat layers. So you've got a limited number of data points, probably about 20, most of which are off the graph which are basically super cloudy spots. So if it's really bright it just goes off the graph and we're, that's not going to be interesting but we've got still a fair number of data points. So maybe seven or eight or 10 that seem to be useful here. I had hoped, well I think I will come back to that later. Looking at it, this way is actually not all that helpful so you can't actually decide what is going to be highest quality or lowest quality very easily in this. So the next thing we do is actually sort it. So in this case I'm sorting by brightness. So the brightest on the right which is, you know, off the graph is basically the cloud pixels again and then down here we've got various sort of usable pixel values. Now when I actually go back and I trace this carefully I can go through individual pixels and I might actually discover that these are cloud shadows and here you're basically on the fringes of a cloud, the fringes of a cloud where it's wispy, you couldn't necessarily even detect it as a cloud or it could conceivably even be hazy although in this case this area wasn't prone to, it wasn't that prone to that kind of haze. So you probably, as you get higher there, you're getting into ones that are not that valuable. Presumably though there's some sweet spot in the center which is well lit, not in the shadow, not hazy pixels. So if I do the very most naive thing and I just take the darkest end, so if I, for every pixel I basically do the sorting and I just take the bottom end, this is the kind of scene I end up with. Oh, it's really not as bright here as I would like but what you're really seeing here is all these things, these are not a real geometry, these are, this isn't a physical feature on the ground, this is just cloud shadows. I basically collected cloud shadows for my region of interest. You've got a few sort of well lit areas although tend to be on the dim side as well. In fact, I think we can flash. So this is from the base layer and it's just to show that in these clout, this patch for instance doesn't represent anything on the ground, it's just a cloud shadow. So you can kind of see that you're introducing a whole bunch of detail and that's quite meaningless. Okay, so this is basically that same graph again so I think I'm trying to make the point now that what we want to do then is instead of taking the dark end, we actually want to pick somewhere in here so the idea is instead of taking the darkest pixel or the brightest pixel, we want to take some sort of a median or percentile way through this stack and then hopefully we're falling somewhere into that sweet zone of a well lit pixel and you can also think of this as outlier removal. The clouds are basically outliers, the cloud shadows are basically outliers, we want something that's sort of representative in the middle. It also kind of helps to avoid you know some weirdly processed or other anomalies. There could be something weird that happened once to one scene and just by virtue of taking sort of a central percentile or median or something like that, you're actually avoiding a lot of those strange results. 
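The percentile idea can be sketched in a few lines (again a simplified TypeScript illustration of the technique, not the actual compositor): sort a pixel's candidate values by brightness and index into the sorted list at a chosen percentile, so the bright outliers (clouds, haze) and the dark outliers (cloud shadows) are both skipped.

```typescript
// Percentile selection over a pixel stack: a simplified illustration of the technique.
// `values` are the brightness samples for one pixel across all scenes,
// with known-cloud samples already removed.
function percentilePick(values: number[], percentile: number): number | null {
  if (values.length === 0) return null;
  const sorted = [...values].sort((a, b) => a - b);   // darkest ... brightest
  // percentile = 0 would give the darkest sample (collects cloud shadows),
  // percentile = 100 the brightest (collects clouds); something in between
  // acts as outlier removal on both ends.
  const idx = Math.min(
    sorted.length - 1,
    Math.floor((percentile / 100) * sorted.length)
  );
  return sorted[idx];
}

// e.g. percentilePick(stackForOnePixel, 65) for a 65th-percentile composite.
```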
So I've experimented with a variety of different percentiles basically to collect that. So this is an example of doing it, the 85th percentile and I will say I actually first remove things that are known clouds. So in the case of Landsat, it comes with a cloud mask which is we know this is cloud, we think this might be cloud, there's a chance this is cloud and we're really sure this is not cloud. What I have done before I do the percentile selection is I've removed the ones that are, this is for sure cloud. So the rest of it, I don't want to remove because I'd rather keep a richer dataset but it's, I at least get rid of the ones that are really known clouds. So this is 85 percentile after removing the ones that were already fully identified as cloud in the Landsat metadata. And this is 70 percentile and 55th. So at the 85th percentile, you can actually still see really strange blotchy cloud shadows and things like that are coming into it. As you get down to 55 percentile, it's a little bit more normal but it's still really blotchy. The image quality isn't good here but in fact the underlying image is not that great. So as I go into it, sort of the lesson is this is not that perfect technique. I talked a little bit before about there should be this sweet spot in the middle which is nice and well-lit pixels. When I was doing these graphs, when I started doing these graphing, I was hoping it was going to look like this. So this is not real data. This is my wish. This is a, I was hoping we could do this, a nice analysis and I would be able to go through these graphs and find these breakpoints. I was hoping this is the breakpoint between, you know, well-lit and clouds and this is the breakpoint between well-lit and cloud shadows and that this would be a thing I could, given enough data points, I could actually go through and do this on a fairly regular basis or maybe for neighborhoods or something like that. Unfortunately, it was not really born out. So I basically wrote some tools so that I could poke at pixels in these stacks and see graphs and stuff like that. So here are some typical examples. So there isn't any real obvious breakpoints. It's just not working out. Here I actually took little 4x4 neighborhoods so I get a denser thing saying, well, maybe I would see it if I had more data points in these regions or something like that. It didn't really, it's basically not very much to see here. Now, part of it is also, we're taking this through all sorts of different seasons so there's a lot of different, you know, effects that are going on. So you don't, it's not like, oh, time. Ah, okay. Hurrying on. So nevertheless, even though the technique wasn't that great, I did it at the 65 percentile and I did generate an image of the world of Landsat. So this is the overview of that. Part of the challenge is just that we had to build a lot of infrastructure around actually being able to process this amount of data. So this is that. Zoom in on Portland. But now I want to, you can really start to see some problems here. So here I zoom in on a farming area and if I actually look between the Armozic and the base map, what you see is there's a lot of introduced basically garbage in the fields, a lot of detail that's not real detail. Part of that might be mixing dry and growing season and so on, but part of it is just the fact that this flipping back and forth between scenes every pixel does strange things. Okay, I'd hoped I would also find this concept of seasonality. 
So I did a bunch of graphing of seasons. This is basically day number, up to about 500 — Landsat 8 data over about a year and a half. What I didn't see is a nice obvious pattern, but there are relatively few data points, so it's hard to see patterns. I might be able to pick something out if I do enough data analysis. Okay, so percentile selection avoids clouds and haze and most cloud shadows. That sounds good, but you get a weird mixing of seasons, there's no really obvious break point at which to do it, and you end up with this terrible patchwork effect. Okay, so this is actually our imagery. This little gap here is actually looking through to the base map, but the rest of it is our scene. One of the points I'm making here is that calibration is absolutely important. Some of this lightness over here could be haziness, but mostly we're just still not getting all the scenes to be quite consistent. These are collected in strips, so they're consistent within a strip, but between strips we're not getting that great a consistency yet. We also do dropout where we know it's a cloud: on the edges it's hard to know it's a cloud, but in the center we actually drop it out of the mosaic. Okay, so this is done with a thing called the pixel lapse compositor. It has a JSON configuration file. It uses GDAL for reading and writing data. It's got a few of these different quality measures and hopefully more will be added, plus cloud masking, and we can also record which scene every pixel came from — that's called the source trace file. It is open source; it's on GitHub. It's not that easy to use yet, it doesn't have really good documentation, and I hope to flesh it out with a bunch of examples using Landsat data in the coming weeks. And here are some examples of the JSON file. You basically give a list of compositors, so you can play with the order you want to do things in, and you can give it different parameters. This is the Landsat case: we say first do the cloud masking — that's what the top of it means — then it says do darkest, and then take the 65th percentile. For each input file it gives the name of the file; there are five files here. This one is from our own imagery. What's different about it is that it actually uses some of these sensor-based quality files, so they get passed in as well: each set has some quality files, the date and the base file. That's another simple example. Okay, so future directions. Blending around the percentiles instead of picking one value — actually pick a few pixels close to that and blend them; Charlie Lloyd from Mapbox was saying that's what he really had to do with Landsat data. Improve the pre-processing, obviously. Experiment with greenest pixel. Right now every pixel is done completely independently, and as much as I love that idea as a simplifying assumption, I don't think it's going to work, so I'm probably going to have to start biasing the selection toward the same scenes as your neighbors until the quality difference becomes too significant. And if that all fails, I'll have to fall back to more traditional patch-based approaches and try to understand where the clouds are geometrically. But I'm also interested in collaborating with others.
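As a sketch of the "blend around the percentile" idea (illustrative TypeScript, not the compositor's implementation; the window size here is an arbitrary choice), instead of returning the single sorted sample at the chosen percentile you average a few neighbouring samples, which damps the per-pixel flipping between scenes:

```typescript
// Illustrative "blend around the percentile" selection; not the actual compositor code.

function blendedPercentile(values: number[], percentile: number, window = 3): number | null {
  if (values.length === 0) return null;
  const sorted = [...values].sort((a, b) => a - b);
  const center = Math.min(sorted.length - 1, Math.floor((percentile / 100) * sorted.length));
  // Average `window` samples centered on the percentile position instead of taking one,
  // which smooths the scene-to-scene flipping that causes the patchwork look.
  const half = Math.floor(window / 2);
  const lo = Math.max(0, center - half);
  const hi = Math.min(sorted.length - 1, center + half);
  let sum = 0;
  for (let i = lo; i <= hi; i++) sum += sorted[i];
  return sum / (hi - lo + 1);
}
```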
And part of the reason why I open sourced this and I wanted to have this talk was there are a bunch of people doing sort of compositing things like this in the world and I would love to get some sharing of experience and even code. And I plan to write some more docs and I'm also hoping to make our Landsat data in tiled form and the Mosaic public. Thank you. Sorry, I just fell back down.
|
Planet Labs is collecting images from dozens of satellites in order to build timely global cloud free mosaics at around 5 meter resolution. I will review the software components we use to accomplish this, as well as discussing challenges and solutions in this process. With luck I will be in a position to show off our global mosaics, and offer Planet Labs open source compositor software.
|
10.5446/31710 (DOI)
|
Hello everyone, thank you for coming, and welcome to this GeoExt 2 presentation. We'll go over the past, present and future of the project. First of all, I'm Julien-Samuel Lacroix, and my co-presenter, Marc Jansen, could not make it to FOSS4G this year, but he should be currently watching the live stream of the presentation — so, hi Marc. You are really missing a quite good conference; we're having a blast. And I just want to let you know that they switched rooms to give us a bigger room, so you are really missing something. Marc is a developer and project manager at terrestris in Bonn, Germany. terrestris is a company that develops open source software and does projects based on open source software in Germany. And I'm Julien-Samuel Lacroix, like I said. I am a developer and project manager as well. I work for Mapgears. We are open source software developers from Canada, offering development, support and training for MapServer and all the other web mapping applications around our software, everywhere in North America and a little bit outside of it as well. So now that the shameless plugs are done, what is GeoExt? GeoExt is a JavaScript framework built to help developers build rich web mapping interfaces. It's based on OpenLayers 2 and ExtJS; it actually enhances ExtJS to give it spatial components. ExtJS is a rich UI framework to build desktop-like applications on the web, so it really has a lot of cool features already built in. Here we have a simple ExtJS — well, GeoExt — application where you have a panel for the map and a grid containing your results, and you can have interaction between the two: if I select a feature in the map, it will be selected in the grid, and if I select a feature in the grid, it will be selected in the map. Another more complex example would be an application where we have a tree, which is already built into Ext, and where we developed a layer tree in GeoExt for the mapping components. With that, we can draw new features or edit existing ones, and we have all the features inside a grid. Of course, this demo is online and available to the public, so you can sometimes have a few surprises about what you find there. Anyway, GeoExt has been a really cool tool to build very, very powerful web applications. It was started in 2009; the first discussion actually happened at a FOSS4G, like this one. It was based on Ext 3, and at some point we got the website, geoext.org. Then at some point in 2011, Sencha, the company behind ExtJS, released a new version of Ext, ExtJS 4, which was really, really great. It came with a lot of great new features, like a new MVC-based architecture — I'll tell you a little bit more about that, but it's a new, cool way to develop applications, and it was now available for our web mapping applications. There was dependency management, so you didn't have to load the whole library just to use a few features. There was functionality for single-file builds, charting, and things like that. But the sad thing is that it was backward incompatible. Of course, we developers wouldn't be stopped by such puny arguments as backward incompatibility. So what we did is organize an international code sprint. We did a lot of solicitation of our clients and potential clients to get money to pay for our developers to get together in Bonn, Germany for a week and port part of the current GeoExt library to the new version of ExtJS. And it did happen.
And for those who are not familiar with code sprints, what happens is that you take a bunch of developers — there were around 20 people in our case — you get them in a room, close the door, and provide them with food and coffee and beer, and they work from 8 until late every day. At the end of the week, we got an alpha release, public examples working with ExtJS 4, documentation, and a brand new library that was working very well and was ready to be used in real projects. So it was a huge success, and a lot of fun, of course. Now, what do we have? GeoExt 2 is there. GeoExt 2 is ready to be used. It has already been used in several applications at our company, at Camptocamp, at terrestris — all the companies that participated in the code sprint have used GeoExt 2 in a production environment. So it's there, and you should use it, because it's a really great library. The current version uses the latest ExtJS 4 version and the latest OpenLayers 2 version, and we are still working to make it better. So what's new in this GeoExt 2 library? We have all the new ExtJS 4 paradigms, starting with the MVC pattern architecture. For those who are not familiar with MVC, it's a way to develop software where your data is independent from the user interface and the user interface is independent from the tool behaviors. Your data is encapsulated in a model — the M — and every change to the data is propagated to the user interface, which is the view. The V is the view, the user interface that you send to your user. Your user interacts with tools, which are controllers — that's the C. So in the MVC pattern you have your data in models and your user interface in views; you manipulate your data with controllers that change the models' state, which propagates to the views and thus to the user, and the user then uses controllers to change the models again. It's a way to structure your code base that makes the management of big projects a lot easier. I'll show you an example a little bit later. In ExtJS 4 we also have build tools. With the dependency management and some scripts provided with ExtJS 4, we can compile applications to make them much smaller, and therefore much faster. For example, for the small example that I will show you a little bit later, the code base is around 5 megs, but we only use around 250K in that specific example because we don't need all the features. So it saves a lot of space and bandwidth, and it's much faster in the browser as well. There's also easier theming: there are several different themes that you can use in ExtJS 4, and they are a lot easier to customize yourself or by your designer. The API documentation is built from the code base; when we did the code sprint, we made sure that all our functions and classes were correctly commented inside the code, which allows us to dynamically build the API documentation from there. So we have a nice ExtJS 4 application, of course, where you can browse the source or the documentation of GeoExt; it's generated on the fly and regenerated each time we make a change to the code base. There's also headless testing and continuous integration. Headless testing means that we have scripts on the command line to test all our changes.
So before committing a new feature or a bug fix, we can easily rerun all the tests that we developed to make sure that our changes do not affect the rest of the library. This means that the product is a lot more robust than any other libraries like it. The website is currently hosted in the GeoXT GitHub, but at some point we will, in the near future, simply replace the GeoXT1 website, geoxt.org, with the GeoXT2 website. Some examples now, yeah. Okay. Here's a few cool features that we have in the XJS4 and GeoXT2. Who here? Raise your hand to people who are developers or JavaScript developers. Cool. There's a few of you. For those who are not, I'm sorry, I will show a little bit of code here. But if you are not interested by it, just take some notes while I'm speaking or look somewhere else. Okay. Here, an example of the print capabilities that we have in GeoXT2. So here we have the extent of the printed map we will do. We can change it and the orange rectangle will adjust to the scale, to the right scale it's supposed to be. We can also rotate it, rotate it, sorry, rotate it like that by 45 degrees or less. So if I trade this PDF, I will get a printed map from a template. If you've been to just see a card presentation on MapFishPrint, that's what is used in that case. And the template is controlled by the developer. But the widget to select the printed area is really the key point here and what is really interesting. So if you've noticed, the north was not north because I did rotate the printing extent. Then we also have Legendary. That is, those images are coming from WMS GetLegend Graphic requests. They are not specific to the application. So if you add new layers, the icons will automatically be generated from the server. Here we have a tree where I can add or remove, dynamically add or remove layers in the map and they will be automatically added to the Legend or the layer tree. I can move them around and they will move in the map as well as in the tree. If we go down, yes. Tugging the visibility closes the layer or removes the icons. Can show or hide in the Legend but keep the layer in the map. Or simply change the icon. Here it's the icon that comes from the server, from the GetLegend Graphic. But I can force it to something else. All that is done with a few lines of code. Here we are creating a map, adding a few layers. That's regular open layers code. Creating a map panel. The Legend panel is simply a definition of the object that will contain the div that will contain the Legend. As you can see, it doesn't contain any layer definition because they come from the map. They are really tied together. Here we have a more complex tree. I can remove or add layers, change their state, and have a distinction between base layer and overlays. The tree model is quite simple actually. We simply define if it's a base layer container or overlay layer container. This will give you a group by overlays or base map layer tree. You can also define filters if you want to have specific layers in the tree or not. Some sliders. Here it's a passive slider that updates the map opacity. When you read it, here it's an aggressive one that updates the visibility. It's really useful in the second example where you have an autoreal layer beside it so you can mix both. OJC capabilities is just a cool feature where you can have a list of all your layers that comes from a WMS get capabilities request. The grid here is automatically generated from the WMS get capabilities. 
I can open previews of them by double clicking them. It's already simple stuff that we have done with GXT2. Before going to the filter, I will just show you the small MVC application. It's a really cool feature actually. We have a map. At the back of the room, do you see the triangles? Those are montains. They are color coded based on their height, on their elevation. So I have a series of columns in a grid that are matched with the... If I select a montain in the map, it will select in the grid. The grid, I can control which columns are available or not. For example, here I will remove the ID column. I can sort my columns dynamically just by clicking the column header. Or I could order them from south to north or from east to west. The thing that is cool with GXT is that you can edit your values. It will reflect directly in the map. Here the montains change color. The graphic did change as well. I can also select from the graphic to be able to explore the data. This is really, really cool and really, really easy to do. What's the future of all this? We're going where there's no road. In the near future, there will be a point release of GXT2. It's mostly bug fixes and continuous maintenance of the library. We will do a lot of advertising and visualization to make people use it. We want people to use it to get feedback. We will take over GXT.org again. In the not so near future, there's a lot of interest in using other mapping libraries because OpenLayers2 is becoming more and more obsolete. There has been already some work to test with Cliflet and OpenLayers3 instead of OpenLayers2. It's in the pipeline. Same thing, XJS5 is out now. We should definitely look at that. It's not as hard to use XJS5 as it was to use XJS4. There's already been some work to port GXT2. It should take a lot less time to port. There's actually already a road map for XJS5. There's maybe a third or a quarter of the tickets already fixed. It's doable. We'll do it. But with your help. We need your help, please. There's also already some work in the pipeline to use other mapping libraries. It's going well. Questions? Yes, please use a microphone so Mark can hear you. What do you recommend for training resources? Basically, I am going to be developing with GXT in the very near future. No prior experience. What's the best way to learn it? We have a lot of examples here. You can learn from examples. There's also a mailing list that you can register to. All the developers are there and fairly quick to respond. Otherwise, if you are looking for training, there's a bunch of company that offers training and support that you can contact. From an open source perspective, feel free to ask questions on the mailing list and to try the examples. Questions? Yes. So all your examples right now currently use OpenLayers 2? All the examples are currently using OpenLayers 2, yes. All the work to use other libraries are in different branches than master. Thanks. So the example you showed about editing. So that front-end editing would communicate with REST API, I suppose, and then to persist changes. Was that the case? It's not the case in that example because all the data is on the client side. In the different projects that we did, you can use either a custom REST API or WFS standard server. There was another question in the back. I've used GeoX2 to develop and that's great. I really like it. But have you made a conscious decision to stick with EXT? In the beginning? No, now going forward, like you've stuck with 4 and you'll port to 5. 
Yes. You thought about maybe changing using a different framework so having GeoAngler. We've had a lot of discussion about that. Some developers decided to pass to something else. Personally, and at our company, we decided to not do everything with EXT because it responds to a really specific problem which is creating big, large application or dashboard application or desktop-like applications on the web. If you want to have a small web application with only small pop-ups and a few interactions, then it may be easier to use something else like Bootstrap or jQuery UI. It's really a matter of personal preference. We will continue to use XJS and GeoXT in big projects. We will continue to use other libraries as well in projects that where they fit. Thank you very much. Thank you.
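To illustrate the model/view/controller flow behind the editable-grid-plus-map demo shown earlier — this is deliberately framework-neutral TypeScript, not ExtJS or GeoExt API code — a model notifies whatever views observe it, and a controller is the only piece that mutates the model:

```typescript
// Framework-neutral MVC sketch; not ExtJS/GeoExt classes.

// Model: holds the data and notifies observers when it changes.
class FeatureModel {
  private listeners: Array<(name: string) => void> = [];
  constructor(public name: string) {}
  onChange(listener: (name: string) => void): void { this.listeners.push(listener); }
  setName(name: string): void {
    this.name = name;
    this.listeners.forEach(l => l(name));   // propagate the change to every view
  }
}

// Views: render the model; here a "map" and a "grid" both observe the same model.
class MapView {
  constructor(model: FeatureModel) {
    model.onChange(name => console.log(`map label redrawn: ${name}`));
  }
}
class GridView {
  constructor(model: FeatureModel) {
    model.onChange(name => console.log(`grid row updated: ${name}`));
  }
}

// Controller: the only place that mutates the model in response to user actions.
class RenameController {
  constructor(private model: FeatureModel) {}
  userTypedNewName(name: string): void { this.model.setName(name); }
}

const feature = new FeatureModel("Mont Blanc");
new MapView(feature);
new GridView(feature);
new RenameController(feature).userTypedNewName("Mount Hood");
// Both the map view and the grid view react to the single model change.
```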
|
GeoExt is Open Source and enables building desktop-like GIS applications through the web. It is a JavaScript framework that combines the GIS functionality of OpenLayers with the user interface savvy of the ExtJS library provided by Sencha.Version 2 of GeoExt (http://geoext.github.io/geoext2/, released in October 2013) is the successor to the GeoExt 1.x-series and is built atop the newest official installments of its base libraries; OpenLayers 2.13.1 and ExtJS 4.2.1.The talk of two GeoExt core developers and members of the PSC (Project Steering Committee) will shortly present the history of the project with a focus on how an international code sprint back in May 2012 lay the foundations of the 2.x-series of GeoExt. The current version will be presented and and we'll discuss new features and important changes for users of the framework. Especially the following aspects will be portrayed:- Usage of the new classes- Compatibility with the single-file build tool of Sencha- Integration into the ExtJS MVC (Model-View-Controller) architecture- Better API-documentation- Easier theming of ExtJS/GeoExt applicationsAs both of the base libraries are about to release new major versions Ð OpenLayers 3 and ExtJS 5 are very near to be being released in stable versions Ð the last focus of the talk will be the future development of the GeoExt 2 framework.The project has already pre-evaluated the possibility of supporting more than just one mapping library, so a future version of GeoExt might bring support for OpenLayers 3 and/or Leaflet and is likely being built on top of ExtJS 5.
|
10.5446/31711 (DOI)
|
MapCache is a tile-serving component that was added to MapServer officially two years ago — or, as Steve said, maybe three now. Just a short introduction to what a tiling server is. Tiles are pre-computed, usually 256 by 256 pixel, images that allow fast access to static data that's been cached somewhere. A tile is aligned to a grid, which is basically a subdivision of the world for a given projection, a given extent, and across different resolutions. Tiles contain data that has been rendered by some third-party service — in the usual case a WMS server, or MapServer itself, or a Google data source, or whatever. Once an image has been generated by that server, it's split up by the tile server and stored in a cache backend — we'll be looking at all the cache backends available — and then those tiles can be served back to clients over the web, respecting different services or standards. MapCache itself is actually a tiling library more than a tiling server, and it has frontends tied either to the Apache web server, or running as a FastCGI instance, as a native Nginx module, or as a native Node module. It's versatile in the sense that it supports multiple cache backends, it supports multiple client protocols, and it has advanced tile management features which allow you to pre-generate tiles beforehand — that's called seeding — recompress images to reduce image weight, and interpolate image data that's not present from lower zoom levels. It's written in native C code, which is fast and lean, and there's a tiny demo interface that allows you to quickly set up a service and then just copy-paste the JavaScript code needed to use that server directly into your own client code. The code is now four years old — it started in 2010 — and it was officially integrated as a component of the MapServer suite in 2012. So, first let's talk about the protocols that are implemented and how you can request tiles from MapCache. It uses either standard XYZ addressing, KML super-overlays to view tiles in Google Earth typically, and it also has a WMS wrapper to serve WMS from tiles — I'll go into more detail on that. For standard tile addressing, it supports the TMS protocol, which is the OSGeo protocol for addressing tiles. It supports WMTS in either RESTful or key-value-pair mode, which is basically very similar to TMS but vetted by the OGC. It supports addressing by Virtual Earth quadkeys, MapGuide addressing, and standard XYZ addressing. It also supports WMS GetMap requests, which means that it will compute any GetMap request for a given size or set of layers and use the tiles that it has in cache instead of using the native data itself. In most cases this is much faster than hitting a WMS server itself, because you're using pre-computed image tiles as the source. So it responds to untiled requests by assembling tiles from the caches. It can assemble them vertically, in the sense that if you're asking for multiple layers it will stack multiple tiles one on top of the other and merge them into a single image, and also horizontally, which means that if you ask for a 1,000 by 1,000 image, it will stitch several tiles next to each other to create that image. It also acts as a kind of proxy for other services — I put 'OGC' in parentheses because that's the main usage. In that case, it can be the front-end to your services: it will intercept the requests that are tiled or that can be served from tile caches.
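To make the horizontal assembly described above concrete, here is a small sketch (TypeScript; a simplified illustration, not MapCache's internal C code) of how an untiled GetMap bounding box maps onto a tile grid: compute which tile columns and rows intersect the request, and each tile's pixel offset in the output image.

```typescript
// Simplified illustration of assembling an untiled request from grid-aligned tiles.
// Not MapCache internals; a top-left-origin grid with square pixels is assumed.

interface Grid {
  originX: number;     // x of the grid origin (top-left corner), in map units
  originY: number;     // y of the grid origin (top-left corner), in map units
  resolution: number;  // map units per pixel at the requested zoom level
  tileSize: number;    // e.g. 256
}

interface TilePlacement { col: number; row: number; offsetX: number; offsetY: number; }

// For a GetMap bbox at the grid's resolution, list the tiles to fetch from the cache
// and where each one lands (in pixels) in the output image.
function tilesForBBox(grid: Grid, minx: number, miny: number, maxx: number, maxy: number): TilePlacement[] {
  const tileSpan = grid.resolution * grid.tileSize;                 // width of one tile in map units
  const firstCol = Math.floor((minx - grid.originX) / tileSpan);
  const lastCol  = Math.floor((maxx - grid.originX) / tileSpan);
  const firstRow = Math.floor((grid.originY - maxy) / tileSpan);    // rows count downward from the top
  const lastRow  = Math.floor((grid.originY - miny) / tileSpan);

  const placements: TilePlacement[] = [];
  for (let row = firstRow; row <= lastRow; row++) {
    for (let col = firstCol; col <= lastCol; col++) {
      // Pixel position of this tile's top-left corner inside the output image.
      const tileLeft = grid.originX + col * tileSpan;
      const tileTop  = grid.originY - row * tileSpan;
      placements.push({
        col,
        row,
        offsetX: Math.round((tileLeft - minx) / grid.resolution),
        offsetY: Math.round((maxy - tileTop) / grid.resolution),
      });
    }
  }
  return placements;   // the caller pastes each cached tile at (offsetX, offsetY), then crops
}
```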
Beyond intercepting those tiled requests, you then have a set of rules that allow you to forward other requests to other servers, a kind of expression language with regular expressions. This means you can send WFS requests to a WFS server; you can send WFS 1.1 requests to one server and 2.0 requests to another server; you can also serve static data from another server. Miscellaneous features, which I've just put together here: it creates the caching HTTP headers so your clients don't keep requesting the same tile over and over again, if you allow it to live that long in the browser's cache. It can expire tiles more or less automatically; basically, if you define that a tile in your cache can only be one hour old, it will automatically delete that tile if it's requested after one hour and recreate it from the WMS source. It will report errors either as a message, as an empty image, or just as basic status codes in your HTTP responses. There's support for meta-tiling, that is, creating multiple tiles from a single large GetMap request to the source WMS server. You can watermark the tiles with your logo if that's what you want. And it can also up-sample tiles from lower zoom levels: basically, if you have 10-meter raster imagery but you want to serve it up to zoom level 20 or 21 for Google Maps, there's no use caching tiles at those zoom levels; it will just recreate them from the lower-resolution data that's available. Once your image comes back from your WMS server, it can be recompressed and optimized before being stored into the tile cache itself. This is useful if you're using raster data and you want to avoid double JPEG compression and decompression, once from the WMS server and once from the tile cache: you request PNG from your source WMS server and then compress to JPEG only once, when you're storing to your tile cache. It also allows you to be aggressive on the compression levels you apply to the tiles when you're storing them and pre-seeding everything, or you can choose a less aggressive compression for doing on-demand compression of image tiles: less CPU use, but at the cost of heavier images. It supports multiple image formats that can be stored or returned to clients, so the two basic image formats used on the web. For PNG you can play with the compression level that's applied to the PNG data, and you can also apply quantization to store those tiles as 8-bit, choosing the 256 colors that best match the image being stored. For JPEG you can also play with the compression or quality level, and you can choose in which color space the JPEG compression takes place; usually YCbCr gets you better compression results. There's also a kind of fun feature which lets MapCache dynamically choose which format it's going to store its tiles in: it stores JPEG if it sees that the image data is fully opaque, and PNG if it sees that there's transparency in it. Basically that means if you have a satellite raster that doesn't cover the whole world, all the parts that are fully covered by the raster will be stored as JPEG to reduce the bandwidth, and the tiles on the edges which contain no-data values will be stored and returned as PNG, so you can overlay them with other layers in your clients. It also tries to intelligently handle empty or uniform tiles.
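Before getting to how those empty and uniform tiles are handled, here is roughly what the image-format tuning described above looks like in mapcache.xml. The format names and values are illustrative, not taken from the talk, and the element names are from memory, so verify them against the MapCache configuration reference:

    <format name="PNGQ_FAST" type="PNG">
      <compression>fast</compression>
      <colors>256</colors>          <!-- quantize to an 8-bit palette -->
    </format>
    <format name="JPEG_85" type="JPEG">
      <quality>85</quality>
      <photometric>ycbcr</photometric>
    </format>
    <format name="MIXED_FMT" type="MIXED">
      <transparent>PNGQ_FAST</transparent>  <!-- tiles containing transparency -->
      <opaque>JPEG_85</opaque>              <!-- fully opaque tiles -->
    </format>

A tileset then references one of these formats by name, and incoming images are recompressed accordingly before being written to the cache.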
Depending on the caching backend you use, it will apply different strategies once it sees that a tile contains no data or just one single color. For caching on disk, it will use a symbolic link to one file. For other cache backends, it can just store the color in the cache instead of storing the image tile, and then dynamically recreate a PNG image on the fly at request time, returning a one-bit PNG of a uniform color; I don't remember exactly the size of that PNG file, it's something around 180 bytes. If you are in the case where you know your cache is already fully seeded, it will treat a tile that's not present in the cache as a fully transparent tile and return that to you. So much for image formats; MapCache also supports multiple grids, not only the Google Mercator one, and you can define multiple grids per tileset you are serving. You can serve tiles in WGS84 for overlaying on a globe, and in Mercator. You can configure it to respond to grid aliases, so the eternal problem between EPSG:3857 and the deprecated 900913 code is handled, and it can handle non-EPSG codes in that case. Inside a grid you can also configure a tileset to only cover a certain extent or certain resolutions of the grid; just say, I'm using the Google grid but I only want the first 10 levels. And then one day, if you want to change that and switch to a higher zoom level, you just change that setting and don't have to recreate all the tiles you've already created. Same for the restricted extents: say, okay, I'm only going to cover the USA, and then later switch to the whole world without having to recreate all the tiles you've already created for the US. Cache backends, which are basically the most important part of the tile cache: a cache is a backend that's able to store tile data for a given XYZ, that's really the basic part of it. To be able to become a cache, your backend has to support four basic operations: exists (to know if a tile is in the cache for a given XYZ), get back image data for a given XYZ, store data for that XYZ, and delete data. There are some specific hacks or features that are only available for certain cache backends, so now I'll go into more detail about each one. The disk cache stores tiles directly in the file system. I put it here mainly for development, testing, or small tilesets, because storing millions or billions of tiles on the file system very quickly runs into the limits of what file systems are designed to do. The pros of this cache: it's very simple to set up, you just give it a directory and MapCache will store its tiles in it; it's relatively very fast; it supports detection of blank tiles; and you have the option to store tiles using different layouts, so you can reuse existing caches you have available from other servers. By default it's going to use the layout that TileCache creates, it can also read caches created by ArcGIS, or you can supply your own template for storing a given XYZ. The cons: it's difficult to manage that large number of files, it's difficult to get statistics about how much space is occupied by those files, it's difficult to copy them from one place to another, and you hit file system limits; depending on the file system, of course, but in the usual case you run out of inodes, you run into too many files per directory, whatever. And you may also waste storage space given the file system block size.
So if you have a block size of, say, 4 kilobytes and you're storing a 128-byte tile, there are still 4 kilobytes occupied by that tile. It supports SQLite caches, which just store the tile data as a blob in an SQLite database with XYZ columns. The pros of this backend: it's a single file that contains your whole tile cache, easy to copy or move over to another server; you can extend it to support any schema you want, as long as it stores the data as a blob and you have XYZ columns, so if you have existing SQLite files with tiles in them, you can just plug MapCache into them and it'll read the tiles; and it's efficient in disk space, in the sense that only the space actually occupied by the tile is taken on the file system. The cons: you may need some tweaking to pass advanced settings to SQLite when you're creating databases with more than about one terabyte of data; I don't remember the exact limit, it's around one terabyte, I think. Another problem is that SQLite isn't designed to handle multiple insertions concurrently, which means that if you have multiple MapCache instances trying to push tiles into the cache, there's a lock on the database and you slow down the insertions very noticeably. It supports a third-party memcached server as a backend; memcached is for storing transient data, it won't survive a reboot, so it's ideal for temporary data: forecasts, sensors, whatever data is only valid for a limited amount of time or can be easily recomputed. One of the pros of this cache backend is that you can distribute the load between your different MapCache instances and memcached servers, and memcached will do automatic pruning of your cache for you, so the tiles that haven't been used for some time get pushed out of the cache to leave room for new tiles coming in. The con is that, as it's memory-based storage, you have limited storage available unless you have very deep pockets for lots of RAM. GeoTIFF caches are used only for raster data in JPEG, so they're specialized for storing satellite imagery. They store the image tiles directly inside a TIFF file, and MapCache will read the encoded data from inside that TIFF. You can choose how many tiles to store per TIFF file, so if you're storing 2,000 by 2,000 tiles, that's 4,000,000 tiles inside one TIFF file, and you're reducing the number of files you have to store on the file system by that factor. Those TIFF files can also be created by a third-party service, as long as they're aligned to the grids you're using, and it greatly reduces the number of files you have to store on the file system. The limitations of this backend: it's limited to JPEG data, so you can't store images with transparency or PNG data, only JPEG; that's a limitation of the TIFF format itself. You can't write concurrently to a single TIFF file, so there's a big lock around the writing operations so that no two processes write to the same TIFF file. And if you're doing updates or deletions inside the TIFF file, the TIFF library won't reclaim the space from the tiles you've removed, so if you're doing lots of updates, that's not the format to use; your TIFF file will just continue to grow and grow.
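To give a feel for the configuration, a disk cache and an SQLite cache are declared in mapcache.xml roughly like this, and a tileset then points at one of them. Paths and names are placeholders, the WMS source "mywms" is assumed to be defined elsewhere in the file, and element names are from memory, so check them against the MapCache documentation:

    <cache name="disk" type="disk">
      <base>/var/cache/mapcache</base>
      <symlink_blank/>   <!-- blank/uniform tiles become symlinks to a single file -->
    </cache>

    <cache name="sqlitecache" type="sqlite3">
      <dbfile>/var/cache/mapcache/tiles.sqlite3</dbfile>
    </cache>

    <tileset name="mytiles">
      <source>mywms</source>
      <cache>disk</cache>
      <grid>GoogleMapsCompatible</grid>
      <format>JPEG_85</format>
      <metatile>5 5</metatile>
    </tileset>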
This is a new cache that's been added for the version that's coming out: REST caches, to store tiles on a third-party server that speaks a REST protocol, so simple GET, PUT and DELETE on a URI. By itself this probably wouldn't be very useful, but it comes in handy because there are authorization hooks to hook into popular cloud storage providers. If you have cloud credentials, you can store your tiles on S3, Microsoft Azure, and Google Cloud Storage. Once you're storing tiles there, there's a cost-benefit ratio to take into account, of course, because storage is not free on those services, and you have to understand that you usually have to run your MapCache instance inside the same infrastructure as the caches themselves, given the bandwidth costs involved; you don't want to pay for the bandwidth between your S3 storage and your server. Unless, of course, you have a fully seeded cache, in which case you can just point clients at the URLs of your REST endpoints directly. Usually this is used with MapCache instances running on EC2, basically. There's a seeder that's shipped with MapCache; seeding is just pre-generating tiles at some point so that they're available at high performance from then on. The seeder is multi-threaded and multi-process, which means you can have multiple instances running at the same time to take advantage of multiple WMS servers or multiple CPUs on the MapCache instances. It works in drill-down mode, which means that, to take advantage of the file system caches on the WMS server, instead of just looping through the tiles by XYZ naively, it drills down so you're always staying in the same area of data on the WMS server; that usually speeds things up a lot. You can seed a subset of your tilesets if that's what you want, only for a given dimension. You can regenerate tiles that are older than a given date, if you know that from one date backwards you need to regenerate. You can restrict to the zoom levels or the extent you want to seed, and you can also restrict to arbitrary geometries, using an OGR expression syntax to extract the geometry; so basically you can say, okay, I want to seed level 18 only on the polygons where my population density is bigger than a given value, to pre-generate where people are actually going to be using your service. There's also a pruning mode, which allows you to delete tiles that match these same criteria. I mentioned dimensions for the seeder; here's what dimensions are. Basically, for a tileset you can store multiple versions of the image data. Typically, if you think of forecast data, you have a date for a forecast and you want to store the tiles for a given date; for a forecast you actually have two dates, the date when the forecast was made and the date for which the forecast is valid, and this is supported. A typical dimension could also be a client ID, if you need to create tilesets that are different and restricted by extent for a given client, or an elevation for temperature maps, whatever. Dimensions can be expressed as allowed values, as intervals (that would be for elevation), or just as a regular expression that must match. We've also added support for a time dimension: in the case where you have an external database that provides the available timestamps you can serve, you can pass an interval, a start time and an end time, to MapCache, and it will assemble the tiles that correspond to that interval.
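Going back to the seeder for a second, a typical pre-generation run looks roughly like the following. The flag names are from memory and the values are placeholders, so double-check them against mapcache_seed --help before relying on them:

    # seed zoom levels 0-10 of one tileset on the Google grid, with 4 threads
    mapcache_seed -c /etc/mapcache/mapcache.xml -t mytiles \
                  -g GoogleMapsCompatible -z 0,10 -n 4

    # restrict to an extent and only regenerate tiles older than a given date
    mapcache_seed -c /etc/mapcache/mapcache.xml -t mytiles \
                  -e -125,24,-66,50 -z 0,12 -o "2014/01/01"

The restriction to arbitrary OGR geometries and the prune/delete mode mentioned above use additional options along the same lines.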
That interval assembly can either be done horizontally, in the case of satellite images where you have multiple scenes and you want to create a mosaic of all the images acquired in 2013, for example, or, and this is what's coming next, as an animation, creating an animated GIF of your tiles over time. For that to work, you must supply a database of the available timestamps, because what you get as an input is just an interval, so you have to know which individual values correspond to that interval. Future work, things that are in the pipe for upcoming versions: maybe native GDAL sources, so we don't have to go through WMS servers and can read directly from the raster data; advanced cache management, so that would be failover caches or redundancy, where if a tile isn't present in one cache you look in another one, or if a cache backend fails you use a backup cache backend, or you move tiles from a slow cache to a faster one, say from file-based caches to memcached. And the last point, which needs to be addressed rather rapidly, is being able to store more than just image tiles, namely vector tiles, either UTFGrids or an upcoming vector tile spec. And I'm done now, so if you have any questions... Thanks. I have two questions. The first one is about the GeoTIFF cache: is this similar to the gdaladdo command, where you add overviews to a TIFF file? It doesn't support overviews for now, so you'd have to create one TIFF hierarchy per zoom level. But if you create a tiled TIFF, so if you pass -co TILED=YES and COMPRESS=JPEG, that's when it can be used. Okay. The second one is about the seeder: I've used TileCache and I've also used MapProxy, and TileCache, for example, had the problem that when requesting a tile from a remote WMS it didn't handle any errors, while with MapProxy you could have a configuration where it would redo the request when it failed. Is anything like that available in MapCache? I think there's some code that went in to be able to resume a seeding after an error, so you can start off from where the error happened. So it'll fail in that case, and once you fix your WMS server, it'll restart from where it stopped. Okay, thanks. Seeding by geometry: will it use the bounding box of the geometry, or actual clipping to the tiles which intersect it? The actual clipping. And another one: vertical assembly, what kind of mixing options are available? Just plain old screen, multiply, that sort of deal? Only screen. Thank you. Yeah, I think other compositing options might make sense. The seeder is multi-threaded, but which backend do you recommend for multi-threaded seeding, since a lot of them lock on the backend? Well, not SQLite in any case; memcached would probably work well in that case. And I'd like to investigate modifying the seeder so that tiles are pushed to the backend from a single thread. Could another option be having multiple SQLite files and having the seeder write to multiple SQLite files? That could be another option also. Any other questions? All right. Thank you. Thank you.
|
MapCache is the MapServer project's implementation of a tile caching server. It aims to be simple to install and configure, to be (very) fast (written in C and running as a native module under apache or nginx, or as a standalone fastcgi instance), and to be capable (serving WMTS, googlemaps, virtualearth, KML, TMS, WMS). When acting as a WMS server, it will also respond to untiled GetMap requests, by dynamically merging multiple layers into a single image, and multiple tiles into an arbitrary image size. Multiple cache backends are included, allowing tiles to be stored and retrieved from file based databases (sqlite, mbtiles, berkeley-db), memcached instances, cloud REST containers (S3, Azure, Google Cloud Storage), or even directly from tiled TIFF files. Support of dimensions allows storing multiple versions of a tileset (e.g. one per customer), and time based requests can be dynamically served by interpreting and reassembling entries matching the requested time interval. MapCache can also be used to transparently speed up existing WMS instances, by intercepting getmap requests that can be served by tiles, and proxying all other requests to the original WMS server. Along with an overview of MapCache's functionalities, this presentation will also address real-world use cases and recommended configurations.
|
10.5446/31713 (DOI)
|
Yeah, two people presenting is going to be twice as bad as one, so get ready for it. Mainly we're just going to go over a collection of somewhat new features, somewhat old features, things that hopefully make your MapServer experience better, things we've learned over the years, things that work well for us. If you have questions, we don't really want to wait until the end; speak up, ask them, we'll repeat them for the live audience and the recordings, but make it an interactive presentation. Don't hesitate to say something if something's confusing, or you have a question, or you want to make a comment. Yeah, we should introduce ourselves and give our background. Okay, well, who are you? I'm Jeff McKenna, and I've been using MapServer for a long time; I consider myself a power user on the user side of things. I came into the game with heavy hitters like Daniel Morissette and the godfather, Assefa Yewondwossen, back in 2000, and I've focused more on the user side, so documentation, installation, and installers. So that's me, briefly, with MapServer. And I'm Michael Smith, I'm with the U.S. Army Corps of Engineers. We've been longtime MapServer users, since 2001 I think. We push a lot of data through MapServer; we're big-volume users. We run it on classified instances, unclassified instances; the DOD uses it extensively, it's all over the Army Corps of Engineers. It's tried and true, it's well tested, and it runs on Unix, Windows, and Solaris SPARC for us, so it's one of those nice pieces of software that we can deploy anywhere. All right, first slide. Okay. A lot of you are neogeographers and want to get your GeoJSON data into OpenLayers or Leaflet and other things like that, and this is something that MapServer does very well, very easily. There's a feature that was added, I think in 6.2 or maybe 6.0, for OGR output formats, so whatever OGR supports, you can output through MapServer through the WFS interface. And one of the options there is GeoJSON. All you have to do is put an output format definition for GeoJSON in your map file, and then in your WFS request add OUTPUTFORMAT=GeoJSON, and instead of getting XML back you're going to get nicely formatted GeoJSON, ready for OpenLayers or whatever type of implementation you want to do. Do we have a live demo of that? Of course, because I think you said we're crazy, aren't we? Yeah, we're going to do it live, if I can find it. Always better live. So without the output format we get back XML; this is your standard WFS response XML. And we'll just add it to the URL... GeoJSON data, just as simple as that. So that's a regular CGI call to mapserv with an output format, right? Yep, set to GeoJSON. And you can easily do things like output to a zipped shapefile for download, output to a file geodatabase, whatever OGR output format you want. Can you put a BBOX on a WFS request? You absolutely can. You can add complex things like DWithin, text-like searches, full-text searches with regex comparisons; any kind of WFS operation can be combined with this. Okay, next. Another thing that was added fairly recently, I don't even think it's in the documentation yet, but it's been in since 6.4.1, is the ability to use scale tokens. This is kind of similar to the ScribeUI syntax for letting you specify multiple different data sources for different scales. Here's an example where we had six USGS HUC layers in our map files.
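A hedged sketch of what such a scale-token setup might look like in a layer; the data names, token name, and scale breakpoints here are invented for illustration and are not the actual HUC configuration from the talk:

    LAYER
      NAME "hydro_units"
      TYPE POLYGON
      SCALETOKEN
        NAME "%huc%"
        VALUES
          "0"       "huc12"   # most detailed data at large scales
          "100000"  "huc8"
          "1000000" "huc4"    # coarsest data beyond 1:1,000,000
        END
      END
      DATA "%huc%"
      # the same token can also drive which attribute is used for labeling
    END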
And by simplifying it and using these scale tokens, which are an extension of the runtime substitution syntax, we're able to have MapServer automatically substitute different datasets based on the incoming scale values it computes, and in this case also a different column value for the labeling of this dataset. So we've been able to turn six different map file layers into one just by using this new scale token ability. Cool. Hey, it's your turn now. All right, I get to talk a little bit. So yeah, what's really important, obviously, is getting more information from MapServer through debugging, using the DEBUG keyword. This has been around quite a while, as I understand it, but we enhanced the documentation around MapServer 5 and put some time into capturing all these different tricks into one document. Basically it comes down to using the DEBUG keyword set at the map and/or layer level; that's important, you can set it at the layer level. There are different values, one to five, with five giving more information than one. You can also, right inside the map file, get debug information from GDAL and OGR using CPL_DEBUG. These are tricks that low-level MapServer developers know, but before the 5.x releases with the enhanced documentation, it wasn't really captured anywhere. MapServer users and mailing list followers will see me every second day talking about the importance of shp2img, like a broken record: well, have you tried it at the command line? shp2img is a great pro tip for debugging your map file. The name of this command-line utility is sort of misleading (Steve shakes his head); it really means something more like map-to-image. The concept is that you pass it a map file, you give it a new output image name, and then you set a debug level. In my case I always use -map_debug 3, because I want to see information such as layer draw speeds. If you can make sense of that, maybe Mike can highlight something like the layers: I see layer six, 2.2 seconds. The idea here is that if you have a map file with hundreds of layers, you can really narrow it down to a problem layer with, say, more than a few seconds of draw time. So that's shp2img. And Mike is also showing that you can pass layer names to shp2img, which is very nice, so you can just call it and see draw speeds for one layer by name. You can also pass extents with -e, and Mike might have it in his history: just -e with a bounding box, and then you get the speeds at that scale, at those extents. This is pretty critical when you're developing and testing map files, to verify that your map is performing well. We do this all the time; we generally run regression tests internally on our production maps to verify that there's nothing going on in layer timings, that something we know should take one second to draw doesn't all of a sudden change to six or ten seconds because somebody dropped an index on some dataset somewhere that we don't know about. I think there's one more trick that we want to mention. I think... oh, yeah. Is it for variable substitution in the map file?
Variable substitution in the map file? Yes, definitely. It's a section called runtime substitution in the documentation, and one thing you do have to do now is... Is it command line? You mean like shp2img? Oh, no, I don't know that it's supported at the command line, like passing the parameter. I see what you mean; I get the concept. But we're about to show you how you can pass it at the command line. shp2img does not support it, but you must be a plant, because... wait, that was my... oh, okay, sorry. Go ahead, I got all the glory. Give me a slide. Give me a slide. Oh, sorry. No, but I know what it is. So, a nice trick is with mapserv itself. Often you'll... not often, once in a blue moon, you may crash MapServer in your web server, and you want to take your debugging out of the web server, so you want to call it without a web server involved. The trick is to pass the -nh parameter, for no headers: mapserv -nh, and then pass your query string through it. Hopefully you can trigger the error that way at the command line, and then run it through a debugger. So here's an example: mapserv -nh with a full query string. And you can pass your runtime parameters in the query string here, and it's just going to run at the command line without the web server involved. That's a great tip to keep in your back pocket: mapserv -nh. And if you're going to be doing debugging in GDB or other things like that, that's typically how you want to invoke things. Okay? I'd like to. Because of the name? I think so. You know, it's super handy; we just need a champion, I guess it could be me, just someone who really pushes it through. We could have two names: we can keep shp2img and have another one called map2img. Yeah, I agree with you. It's a good idea, but it's kind of late in the 7.0 process. Okay, we must move on. Okay. So, a big change was added in 6.0, so this was, I guess, a couple of years ago, but this is really for users and people who are previewing their map files, such as when running through the command line. You know, we create a map, we add a layer, we want to see it live, we want to see the map image, we want to zoom in. What were we doing before 6.0? We maybe had a local CGI instance from 2003 or something; that's what I was using. You might have had your own OpenLayers code template that you called. But since 6.0 it's embedded natively into MapServer: you're able to call an OpenLayers instance. The idea here is that you're already calling the CGI, you're passing your map file, and you say mode=browse, template=openlayers, and that triggers a call to a remote OpenLayers template. Which actually is remote; you can see that it does need internet access, so you're not fully local. Help me out here, Mike. Yeah, it reaches out to mapserver.org to get the required OpenLayers JavaScript library, which is getting a little old at this point; it's an OpenLayers 2.x instance. Probably in the next release, 7.2 or 7-something, we'll update that to OpenLayers 3. I've given workshops before and sometimes I'll hit a wall because everything's local: I've got everything backed up, I think it's going great, and then I go to show a nice viewer and I get nothing, because I have no internet access and I forgot to point to a local OpenLayers template.
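The call itself is just a normal CGI request, something along these lines (the map file path and host are placeholders):

    http://localhost/cgi-bin/mapserv?map=/var/www/maps/mymap.map&mode=browse&template=openlayers&layers=all

That returns a simple slippy-map page wrapped around your map; as noted next, by default the JavaScript it loads comes from a remote host.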
In the map file, you can configure it and point it to your own OpenLayers JavaScript instance and use that; it just doesn't do that automatically, you have to specifically put in the configuration values. So just be careful and be aware that out of the box it is pointing to an external template. Next. So here's a demo example of the OpenLayers viewer: you can see we're just calling template=openlayers with mode=browse, and it brings back this live map in OpenLayers. Julien? It does also work with WMS. It does also work with WMS, yes. Cool. So this really changed things; I really think in the last few years this has been a great addition, an excellent addition. I don't know how much work it was to implement, but I really appreciate that this went into the software; it's really handy. Okay. And the next topic, the next pro tip, has been around for a while: includes. Many of you in the room might already know about them, but Mike and I want to stress this: if you don't know about includes and you're not using includes, this is probably the most important pro tip in this whole presentation. If you're not using includes, you should be using includes. Fair enough. It's been around for a long time; off the top of my head, I didn't look it up, but I think since around 4.10. It's an INCLUDE parameter in the map file, and the idea is that you can include any part of the map file. Typically, I myself might use it for layers, so what would have been a long layer block in the map file is replaced by one line: INCLUDE followed by the include file name. Any time you have something you're going to be using twice, three times, four times, a hundred times, this is where you want to use includes. Standard connection strings, standard metadata, standard paths to datasets, layer sets that you're going to be using across different map files: these are the kinds of things you want to put in include files, so you're not replicating the same stuff over and over again, and when you make a change, you make it in one place and it affects all the map files you want it to affect. Yeah. So one thing: I like using it, but the only thing that I didn't like was the error messages; they don't point you to the right place. Right, and we talked about that. The question was that if you're using includes, when you get an error message it gives you a line number, but not which include file it's in, so you can't really isolate it. That is something we still need to do; it still exists, yeah, it is a problem, and maybe we can take care of that for 7.0. It's been hanging around, and I think maybe it's on us, on people like me who didn't report it vocally enough and follow through. That is the one problem, right? Yeah. But include files can also be nested, and it doesn't matter which extension they use: map files have to use the .map extension for a security reason, but include files don't. There used to be a limitation... correct, help me out here, guys... there used to be a limitation that you could only nest five deep, like include inside include. Right. No, it actually goes deeper than that now; I think the limit was raised. I thought it was fully removed, but I just wanted to comment.
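As a minimal sketch of the idea, with file names and paths invented for illustration:

    MAP
      NAME "demo"
      INCLUDE "settings/outputformats.inc"
      INCLUDE "settings/web_metadata.inc"

      INCLUDE "layers/roads.lay"
      INCLUDE "layers/parcels.lay"
    END # MAP

    # layers/roads.lay simply contains a complete LAYER ... END block

The extensions .inc and .lay are arbitrary; as noted, only the top-level map file needs the .map extension.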
I think that nesting change was an Even thing; anyway, be aware of that one. And we can just skip over this slide; these are just some examples of where you can include, and that's an example. You can see your map file can be really simple, because you can just include everything that you want: the paths, your output formats, your web metadata, all the layers you want to have. Keep your map file simple and have the details in include files that you can reuse across other map files. All right. But there is a bit of a performance hit, because you're reading a little bit of extra file, I suppose. So the strategy I use is to compile it: just use MapScript and a one-liner script to read the map file and write it back out, which expands the includes for you, and that's what you use in production. Although unless your servers are older than 2006 or something like that, you probably wouldn't even notice the hit, I don't think; we're talking about a really minimal performance hit here. We don't find it noticeable. Oh, go ahead if you want to. Everyone wants this one. This has been around for quite a while; in fact, how long has it been, Mike? RFC 6. This has been around in an undocumented fashion for probably eight, ten years. Ten years, yeah. Go ahead. It's a way to take data values from a raster or something like that and assign a whole color range to them. So rather than doing raster classifications on pixels with small ranges, you can specify a start value, an end value, the data range, and it'll assign a whole color range to that data. Yeah, so the key here is the color ramp, right? Everyone thinks of the color ramp. You pass one start color and one end color, and then it interpolates and creates the color ramp in between those two colors and the two values. Yes? Any work on making a legend with that? That's quite a common question; you get no legend. Yeah, that's something we have noted; there is no legend output. No, there hasn't been, but we welcome patches; contributions are always welcome. Yeah? Just one thing on that: generally the interpolation happens in RGB color space, and RGB doesn't always give the best results, so I don't know whether there's any HSV or HSL color space support. Well, I think as part of 7.0, for the heat map stuff, what Thomas Bonfort has added to the color range code is support for multiple color ranges and also support for HSL-based color ranges, so there is that support now. I don't know that it's fully documented yet, but we'll be adding it to the documentation very shortly. So that's very cool, because when it was first implemented, color ranges had a huge limitation in that you could only have one range, and now you can have multiple style objects, multiple color ranges and color ramps. That is raster only. And we have a demo, right? No, not for this one; I think we're running low on time. Yeah. So, okay. This is just, for me, a pro tip: be careful with naming guidelines, really careful. Special characters in map file and layer names are sort of a no-no, definitely a no-no. They will most likely work in MapServer and most likely won't work downstream, such as in OGC services when users or clients are calling layers; you also want to be careful with numbers at the beginning of your layer names, or special characters.
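For reference, the color-ramp feature described a moment ago looks roughly like this on a raster layer; the data values and colors are made up for illustration, so verify the exact parameter names against the MapServer documentation:

    LAYER
      NAME "elevation"
      TYPE RASTER
      DATA "dem.tif"
      CLASS
        STYLE
          COLORRANGE 0 0 255  255 0 0   # interpolate from blue to red
          DATARANGE 0 3000              # across pixel values 0..3000
        END
      END
    END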
And Mike, we hit this naming issue when we were preparing this talk; do you want to explain that? Yeah. One of the things we used a colon for was layer clusters; we'll show an example of this in a moment, but layer clusters used a colon in the column name in the map file syntax. That broke WFS output, because the colon in the column name gets treated as the namespace separator. So that's one of the breaking changes between 6.4, the 6.x series, and the 7 series: it's been changed to an underscore to eliminate that OGC namespace issue. And I've always been really careful with naming because, excuse me, I do a lot of work on Windows. What operating system is that, Jeff? Not a real operating system? Yeah. And then deployment is almost always 100% on Unix, and Windows is forgiving with naming and Unix not so much, or not at all, sorry. So yeah, save yourself from banging your forehead. Yeah, so quickly on to another feature, which was added in... 6.0. 6.0. Many of you are familiar with client-side clustering of points into one single displayed point on the map; in this case it's server-side clustering. It's an attempt to reduce the amount of data that you're sending to your clients. So instead of, say, a million points, you can cluster them down to far fewer representations and include the data values on the actual cluster that represents that data. You can also query a cluster; there's a special PROCESSING directive, I forget the exact name, something like "cluster get all", and the idea is that you can pass that processing parameter and then actually query the cluster and have MapServer return the underlying features. So this is the same dataset, the dataset of earthquakes that we have here, but you can see how there were lots and lots of individual earthquakes, and here we've added the CLUSTER keyword and it turned those into aggregations of the data. As you zoom in or zoom out, it recalculates those numbers, regenerates the aggregations and updates the data values. So you have an existing LAYER object and you add a CLUSTER object into that layer. Yeah. This is a new one that was just added in the 6.4 release, or 6.4.1. In the past, when you had raster tile indexes, they were limited to a single projection and you had a single map file layer per raster tile index. With GDAL 1.11, a new option was added to the gdaltindex command to record the spatial reference system of each raster in the tile index, and MapServer can read that now. So you add the SRS name and you can have all kinds of spatial references within one tile index; only for rasters, however. And then in the map file you just add the new TILESRS keyword and point it to the column that has that value. So now you can reduce many, many layers: we have raster data across at least 25 different UTM zones, and this allowed us to reduce the number of map layers we maintain for raster stuff by a factor of 25; we've reduced it to a single tile index. This could also be one of the most important pro tips of this presentation; it depends on how often you use raster tile indexes, but if you're using them a lot, this is a big one to make your life easier. The next one really affects PostGIS and Oracle especially.
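Before moving on to that PostGIS and Oracle tip, here is roughly how the mixed-projection tile index setup from above might look. The gdaltindex option name and the column and file names are from memory and for illustration, so verify them against the GDAL 1.11+ and MapServer 6.4 documentation:

    # build a tile index that records each raster's SRS in a "src_srs" column
    gdaltindex -src_srs_name src_srs ortho_index.shp orthos/*.tif

    LAYER
      NAME "orthos"
      TYPE RASTER
      TILEINDEX "ortho_index.shp"
      TILEITEM "location"
      TILESRS "src_srs"     # column holding the per-tile spatial reference
      # plus the usual projection and processing setup
    END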
So, here's that tip: for a lot of operations, MapServer needs to know the extent of your data, and it will actually query all your data in order to compute that extent, for things like GetCapabilities and other OGC operations. You can set the ows_extent metadata for your data instead, and it doesn't have to be the exact value; you can set it to a world extent even if your data doesn't cover the whole world, but this will prevent MapServer from running through your dataset trying to calculate the extent every time you access it. So if you have 10 million records, it's going to run through all 10 million records to compute that extent unless you set ows_extent. Now, Mike and I were debating whether setting a world extent actually helps MapServer itself; it's not MapServer that's running through all that data, it's the back end. Right, but MapServer is asking for it. So yeah, it is a good tip to set that metadata value in your layer. Yes? The extent of the layer? Yes. Does it not limit the data? It does not limit the data, no. It's just used for the GetCapabilities listing, for the WFS listing, but it doesn't actually filter by that extent; if you request an extent larger than that, MapServer will honor the request even if it's outside the ows_extent. He was asking what the difference is between this and just using the regular EXTENT parameter at the layer level. The extent you pass when you make an actual data request is what MapServer uses to fetch the data; this is a pre-calculation it does to determine what extent your data has, and it will actually make two passes through the data if this is not available, or if MapServer can't determine what extent your data has. And this is specific to OGC requests, right? Primarily, yes. You see it on the mailing list when someone hits this: why is my Oracle Spatial so slow, why is my GetCapabilities taking 20 seconds, or never finishing? And this is almost always the answer: set ows_extent. Yes, is this something set in the map file on the layer object? Yes, for each layer. And it's not a bad thing to do for other connection types as well, so I know it sounds kind of annoying, but... And the final one is just: use syntax highlighters in your various map file editors. There are a lot of different ones out there and we've tried to aggregate a bunch of them here; there are syntax highlighters for Sublime, UltraEdit, Notepad++, TextPad, and even old-school Vim and Emacs. These are the kinds of things that just make your life that much easier. And Scribe has syntax highlighting as well, which hopefully will become part of the MapServer project pretty soon. So that's exactly 30 minutes. All right. Thank you.
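To round out that extent tip with something concrete, the metadata goes on each layer, roughly like this (a whole-world WGS84 extent is used here purely as a placeholder; use whatever generously covers your data):

    LAYER
      NAME "big_oracle_layer"
      TYPE POLYGON
      METADATA
        "ows_extent" "-180 -90 180 90"
      END
      # connection, data and class definitions as usual
    END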
|
MapServer is a fast, flexible and extremely powerful tool for creating dynamic maps for the Web. Underneath the hood, MapServer offers many powerful and advanced features that many users never dig into, and new features are being added constantly. Come learn about some of the more advanced features of MapServer, from heat maps to 3D WFS services to exporting data to GDAL file formats to very complex symbology and labeling. Learn simple and advanced use cases and debugging techniques for some of these advanced features from two presenters with over 20 years combined experience of using MapServer. A live MapServer instance will be used during this presentation (yes we are crazy!).
|
10.5446/31714 (DOI)
|
My name is Ryan Bowler, and I'm here to present with Mike McGahn on behalf of this team at NASA. This is a collaboration between NASA's Goddard Space Flight Center and NASA JPL, plus somebody who used to be at JPL and is now at a mapping company that I won't say out loud here at an open source conference. What we're going to talk about today is the open geodata we have, the open services we now have, and the open software we have to handle these things. To start, I want to talk about the spacecraft that we have and the observations they make. It all started back in 1960 with TIROS-1, and this is an artist's conception of it, majestically soaring through space over a hurricane. And what came back, imagery-wise, well, it looked like that. But the thing is, it was the first time that we realized you can use satellites to look at the Earth, effectively for weather. So it was a great starting point, and we followed it up with the Nimbus series of spacecraft, which launched in 1964. There's another artist's conception. We got the data back, and it actually looks a little bit more scientific: we've got some scale bars there and some things that look like an orbit track. And you have people like this who actually operate the ground station, so you know we're making progress because look how fancy that machine is. Fast-forward to today, and you've got about 15 or 16 operational satellites in orbit. These are pretty different from the commercial variety you might see, and I'm personally a little envious of some of the things that Planet Labs is doing and the high-res spacecraft, but these cover a lot of scientific domains, and I'll get into the details of what kinds of things they can see. The first sample image I'll show you here is from our EO-1 spacecraft. This is a volcano in Indonesia from last year. If I'm showing a volcano image, that means there's going to be a before and an after. So here's the before, and here's the after. This one had quite a big impact when it erupted earlier this year. You can flicker between the two here; that's a 30-meter-per-pixel resolution instrument. This one is from MODIS on the Terra spacecraft, of a snowfall on the east coast earlier this year. It's a pretty picture of a reflectance product we have called corrected reflectance, which uses the red, green and blue channels of the instrument. But we can also build scientific products in the other bands; MODIS has 38 channels, I believe, of different wavelengths from infrared through visible. So you can start to do things like quantify how much snow actually fell. This is a snow cover map here, and you can tell it picks out the snow pixels mostly and doesn't pick out the clouds, so you can start to do some quantitative analysis using NASA data. Likewise, this is a different kind of measure of water: using MODIS again, this is 250 meters per pixel over California from earlier this year, and it's a measure of vegetation. As anybody who lives on the west coast knows, California is not doing very well this year for vegetation or water.
And now we're in the modern era, so you don't have to use these old reels; things are online. This is the golden age of data in some sense. But as most people who work with a lot of data know, you usually have some problems: the data formats are very different, there are different processing levels, different resolutions, and of course, for geospatial data, the coordinate systems are never the same. A specific problem we have at NASA is that a lot of our datasets are time-varying, and that's something that's not very well supported in OGC standards or across a lot of tools. I'll address that in a little bit, but it's a big challenge of ours. We've also got a ton of data: every day we get about 8.5 terabytes, and our total archive is about 10 petabytes right now. But what we really want to do is open this data up for outsiders to use, so to speak; instead of having to spend your career in grad school learning how to use this data, it should be a little bit easier. So, new solutions: what are we doing right now? We're trying to make things visual. We want users to interact with their data visually, to discover data visually. To that end, we have a set of services that provide open access: there's no registration key, you just use them, tiled imagery of the datasets, open for use by mapping clients, GDAL scripts, and GIS clients. I'll get to those in a second. We also have an open source, browser-based client called Worldview, which Michael will talk about shortly. So here's some background on how we're doing this. If you've used NASA data, things are locked up in these binary granules; one little chunk here is considered a granule. We're working with our data providers to basically project that data, apply whatever algorithms they need to, apply quality control, and then rasterize it into some nice imagery like that. Then you repeat that for the next granule and the next granule, and you have a whole day's worth of these granules. What you can do then is assemble them into these full-resolution daily global mosaics, and then you do that again for the next day and the next day, and then you do that again for different kinds of products. In this case, it's a sea surface temperature product, and we've got about 70, 80, 90 products or so in GIBS right now, showing a combination of... well, actually, I'll show you in a minute. So anyway, once you have a lot of this imagery, we need a way to ingest it; to that end, we have an imagery ingest system called TIE, The Imagery Exchange, which will be open sourced next year. We have a tile storage format and a tile server: those are the Meta Raster Format (MRF) and OnEarth, respectively. If you saw Joe Roberts' talk earlier today, he went into the details of those. Both are open source and available on GitHub, and particularly the storage and serving software has been around for quite a while and was open sourced recently. I don't have a whole lot of time to get into the details, because these could all be individual talks in and of themselves, but our goal here is to have a high-performance raster tile server, and that's what this achieves. So basically, once you have that imagery as tiles, you can serve it out to clients.
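A single tile request against these services looks roughly like the following; the layer identifier, date, tile matrix set, and tile indices follow the pattern in the GIBS documentation, so treat the specifics here as illustrative rather than authoritative:

    https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/MODIS_Terra_CorrectedReflectance_TrueColor/default/2014-08-31/250m/6/13/36.jpg

Swap the layer identifier or the date to get a different product or a different day; the WMTS GetCapabilities document lists what is available.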
You can access it through GDAL scripts and also through GIS clients, but there's a caveat on the GIS clients in that not a lot of them support time-varying datasets. That's really a big challenge for us, and we're working with the OGC to handle time a little more robustly right now; it's a challenge. WMTS, the Web Map Tile Service, is our primary spec; we've got KML and a tiled WMS as well. Here's just a sample tile request, a RESTful access where you've got a product name, a time, a projection, and your zoom level, row, and column, and you get back a nice tile; I think that's the Canary Islands in this case. The types of imagery we're serving right now: we've got these reflectance kinds of products, so there's natural color up in the upper left and a couple of false-color band combinations in the lower left, and on the right are the science-parameter kinds of renderings, where in this case the snow cover is in blue, the sea ice is in fuchsia, and the clouds are up here; that's the base layer. We've got four different map projections that we support: we started off with geographic, we extended to the poles, and then, love it or hate it, we also have the Web Mercator projection, so it's compatible with a lot of other commercial sources. In terms of the products we have in there now, we're really a MODIS shop at this moment, with a couple of other products in here too: we've got AIRS, OMI, MLS. And let me draw your attention to the resolutions we're talking about here: this is 250 meters, and coarser in some cases. In the future we'll have ASTER and a couple of Landsat products, which will go down to 30 meters. Right now we have about the last two years of imagery available, and we're working with MODIS to reprocess everything from history, so that will go back to the year 2000. So what does 250 meters per pixel look like? This is as good as it gets for MODIS: a clear day above San Francisco from last month. And you have this, basically, globally, twice per day from the two different MODIS sensors. It's also available in near real time: within three to five hours of it being observed, you can pull it down and do what you need with it. That helps us do things like applied sciences: looking at floods, looking at wildfires, doing some shipping. Here's a five-day period looking at a wildfire outside of Yosemite from last year. And then at the poles, this is near Nome, Alaska this spring, looking at sea ice; this is daily imagery, so if you want to plan when to get your icebreaker through, this sort of thing helps with that. So that's what we currently have, and Michael will show some more samples of the kind of imagery we already have. In the future, we're really excited about getting Suomi NPP imagery in, which is a lot like MODIS, but what MODIS doesn't have is the day-night band: basically a band that lets you see imagery at night, so you get a lot of city lights, you can see some fishing vessels, and you get some fires that are burning at night. It'll be exciting to see; this is the Nile River Delta. There's also a Landsat product that NASA is working on called WELD, which basically takes the cloud-free imagery on a weekly, monthly and annual basis. This is a monthly product, I believe from 2011.
And what you can do is take the best monthly products and then make these pretty annual products as well. So it's at this point that I need to say we need help, because we've got a lot of imagery that we want to bring in and not enough people to work on it. So forgive me as I take a quick detour and say we have an open position on GIBS. If you like working with satellite imagery, you like operating Linux systems, and you want to help us figure out what to do next, we've got a full-time position at Goddard, outside of DC, so follow up with me or Matt Giacchini here. Back to the program. Showing how other people are using this: the EPA has a mashup they've created which shows their ground stations, which measure air quality, as these point sources, and they've underlaid, as a base layer, one of the daily MODIS images served from GIBS; you can see there's some smoke here, so it provides context for the ground stations. Likewise, here's another GIBS product that measures aerosols in the air, and they use this for context as well. CartoDB is integrated with us as well, so you can do some of your visual storytelling with our imagery. Mapbox emailed us very politely one week and said, we want to scrape your entire archive, and how can we do that without really annoying you guys? And they did, and they did a great job of it. What they did is basically take all of our cloudy imagery, in the upper left, sort it to pick the best, cleanest scenes, and merge them on a best-pixel basis, and you have their Cloudless Atlas, whose moderate zoom levels come from MODIS imagery. I'm sure I'm horribly oversimplifying what they're doing, but you get the idea. It's also being used in a museum setting: here's a Science on a Sphere, a five- or six-foot-diameter sphere that shows imagery playing over time. But really, what I would pose is that these are open services, so we want the community to build whatever they want, whatever meets their needs, with our imagery, because we think this really opens things up quite a bit. We have some live examples and some source code which give you some bare-bones clients in Leaflet, OpenLayers, Google Maps, and Bing Maps, and you can build your own client that way. So I think there's a ton of room for educational apps, science apps, decision-making apps, mobile apps, surfing apps; if you want to know what the ocean temperature is and you're dreaming about going somewhere warm, take your pick. So I'll pause here and turn it over to Mike, who's going to talk about the client we're building on top of GIBS, called Worldview. I'll flip you over to... I don't know if you can make sure that's fine, or do you want me to go right through the video? Okay, so normally I would like to give a live demo of this, but the internet is actually not working all that well in here today, so I'm just going to go ahead and show a video instead; the application is available online at earthdata.nasa.gov and you can check it out later if you would like. Okay, so I'm going to start off with events happening, like, right now. This is actually a volcanic eruption in Iceland; I don't know what the name of the volcano is, and if I did, I probably wouldn't be able to pronounce it anyway. You can see here that we're using the 7-2-1 band combination from MODIS, and you can actually visibly see the lava trail there.
I'm a little awkward when it comes to doing movies so be prepared. Okay, so if you click here now this is actually the true color. So this is actually what you see, you would see with like your normal eye. So you can actually see the smoke balloon coming out but when you actually have those additional band combinations you can actually get more detail and more information from the surface. We also have a brightness temperature product and you can click that and you can actually see as well that there's a big anomaly there in the temperature. And so this has actually been happening for a couple of days. So this was actually, I was showing before, actual imagery from today. Okay, and we can go back a couple of days and see how it's evolved over time. Some days are cloudy and you can't see through, other days are good. Okay, so here we actually have some fires. This was, I got too feisty with the mouse pad. Okay so these were some fires around sometime in August. This is like the border between California here and then Oregon up there. You can see here all these are red dots is where the cell is picked up that there's a thermal anomaly. And you can see there's actually a huge cloud there that came from it. Actually, I had a friend who was in the area at the time. He says it was pretty intense. You can zoom out and you can actually see sort of like the extent of the wildfires that were happening there at that time. Okay so this is actually, when I took this video, this video I took like around lunch time. So as you can see, this is sort of like showing the imagery from today. This is basically how far in the satellite they got in the data point. Mine is the processing delay of about three hours or so. So as the new imagery pops up, it is immediately available for viewing here. Also have these orbit tracks enabled so you can sort of see when the satellite is supposed to be passing overhead. So one of the big motivators to start this was actually be able to visualize near real-time imagery as it came in. So some other things you can do with the application that we have built using Gibbs. Actually, one of our popular features is actually just be able to do a image download. So you can basically just pick a region that you like. You can choose what resolution you want. If you don't want something that's a completely like large image, you can pick a different file type. I hit the download button and it will generate your image for you. And then you can save that to your local disk. That's right. So before we actually support different projections, so this is looking at the Arctic and also the Antarctic. You can see there's actually a big hole here because actually there isn't any daylight down there. So you can actually go back through time and sort of see how the hole changes depending on the season. That's one of the fun things about having a temporal component. Okay, we have about 150 layers of cell in here right now. Basically, you can do a search on the left, pick the layer that you want and it appears. These are fires that are currently burning down in Africa, mostly for vegetation, for example, clearing agricultural land. And you can actually put multiple layers on. You can put as many layers as you want. You can sort of filter them by areas of interest as well. You can always change the order if you don't like the way it's being rendered on the map. And you can actually change the color palettes if the data starts interfering with each other to get a better view of it. 
And we also have the ability to actually download the underlying data. So if you're actually like a scientist and actually see something you like and you want to download it, we actually provide a pretty simple interface where you can just go and click on the granules that you want to download. So you pick the ones that you want. That's all you have to do. And then simply click on the download button and it will actually give you a list of links of where you can actually obtain that data. Okay. You want to share what you see? You can cut and paste a permalink and send it through email, Twitter, whatever. And as a last thing, we're actually doing some older imagery as well. So it's actually an AMSR-E product that actually shows sea ice. And here at the poles, you can sort of see how the extent of the sea ice changes over time seasonally. So that's a quick overview of what we have done with Worldview. Like I said, this actually gives us an open service that can be used by anybody to build whatever client they want. Or if they have existing clients, they can use and incorporate that data inside their stuff. So, have you got any questions? Does GIBS have a metadata service, like being able to display when new MODIS imagery is available, or what the extents of imagery are? You mean which, I'm sorry, can you repeat the question? The extents of what's available with GIBS? I'm asking if there's a metadata service to query what the dates of available images are over their extents. So as part of the WMTS spec, there's a GetCapabilities request that you can make. And it'll tell you the start and end times that are available. Most of our products are global, so usually there's no kind of regional extent that it'll give you. Hey, Ryan, just to follow on to the question, I guess, is your search service available through some REST endpoint, and can I get at the search results rather than just typing it in on that screen? What's that, for the actual search for the data for download? So we actually, that's actually querying the ECHO catalog to get that information. So you've got what, an OpenSearch API? Is that what you used? No, just used a straight-up ECHO one. So can I just read through your ECHO query here so I can go to ECHO? Yeah, well, the thing with that is that since you can use ECHO and you can use Reverb, you know, we're not trying to reimplement it. You know, that's not the focus of our application. But we actually want to provide a simple way where you can actually get what you like there. So for what it's worth, they're supposed to have sub-second search available this year. So whenever you click, you know, search for granules to download here, it should soon be fast. That's up to them. Just a quick question. You mentioned you have future plans for the VIIRS day-night band imagery. Will you have any plan to incorporate the historical DMSP OLS imagery? The historic which imagery? DMSP OLS, because they're part of the US data products. Sure. Not right now we don't. That would be nice though for continuity, right? Because it's the only other nighttime product available. No, unfortunately, not right now. Yeah, that's from like 1992 to 2012. Yeah, yeah. That's what I mentioned. Yeah, no, thanks for the suggestion. But no plans right now, no. Any more questions? Any answers? Okay. Okay. Thank you for coming. Thanks a lot. Thanks a lot.
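To make the GetCapabilities answer above concrete, here is a hedged sketch of pulling the advertised time values for each layer out of a WMTS capabilities document with plain Python. The capabilities URL and the Time dimension element names are assumptions based on the WMTS 1.0 schema and should be adjusted to whatever the service actually returns.

import requests
import xml.etree.ElementTree as ET

# Assumed GetCapabilities URL -- check the service documentation for the real one.
CAPS_URL = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
            "wmts.cgi?SERVICE=WMTS&REQUEST=GetCapabilities&VERSION=1.0.0")
NS = {"wmts": "http://www.opengis.net/wmts/1.0",
      "ows": "http://www.opengis.net/ows/1.1"}

root = ET.fromstring(requests.get(CAPS_URL, timeout=60).content)
for layer in root.findall("wmts:Contents/wmts:Layer", NS):
    name = layer.findtext("ows:Identifier", default="?", namespaces=NS)
    # Time-varying layers advertise a Dimension named "Time" with their valid values.
    for dim in layer.findall("wmts:Dimension", NS):
        if dim.findtext("ows:Identifier", default="", namespaces=NS) == "Time":
            values = [v.text for v in dim.findall("wmts:Value", NS)]
            print(name, "->", values[:2], "..." if len(values) > 2 else "")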
|
The satellites which comprise NASA's Earth Observing System (EOS) have a long history of capturing rich datasets with global coverage over extended periods of time. While the data itself is rich (and open!), it can be a daunting task for uninitiated users to find suitable datasets, learn the data format, and subsequently find interesting phenomena. Even for those who are familiar with the data, it can be a time consuming process. But thanks to the proliferation and maturity of open source geospatial software, NASA has been able to build an imagery ingest pipeline, open source tiled imagery server, and open source, web-based mapping client to encourage exploration and discovery of NASA datasets. This talk will describe how NASA is building these capabilities through the Global Imagery Browse Services (GIBS) and Worldview client, demonstrate how others are building upon them, and show what it takes to integrate NASA imagery into clients using the GIBS API.
|
10.5446/31720 (DOI)
|
Who am I? I think of myself coming from the computer science side of things first and then kind of the environmental science side of things second. And my goal in my work life is to kind of bridge the gap between those two. My main job is I work for NOAA as a contractor doing oil spill response and stuff. But again, this has nothing to do with what you're going to see today. This is really kind of a hacking project, but that's a little bit about me. I also am a fanatical mountain biker. I have two kids and I live on an island. So how did we get here or how did I get up here in front of you? It really actually, David Sheen, can you raise your hand? I think you're in here. I said, there he is, Iceman. We run an open source GIS group out of Seattle called Kugos. It's a very, very active group. David Sheen is from the UW. He came and gave a talk. I'm the type of person that I'm attracted to shiny objects. And he showed a couple of slides, like what you're seeing right here. And I was like, what the heck? He's up in an airplane with a SLR camera taking pictures of Mount St. Helens and coming home and spinning these 3D models around where there's images up in the air. And I was like, that is crazy, crazy stuff. So that was like kind of my introduction to this whole concept of structure from motion. And that kind of got the process started. Then a couple of Google guys came to one of our meetings and they literally come in with backpacks with these quadcopters on their back. And they're like showing us all this stuff and they were flying around big ones and little ones in this little room with us. And I, within like seven days, I had one delivered from Amazon to my house. It was really, really cool. Another shiny object. And you know, I passed it off to my wife. It's like, well, it would be really cool. I can take pictures of my kids on bikes and stuff. So like this is actually a snapshot of a video of like you can see the shadow of the quadcopter and taking pictures of bikes and stuff. But really what I wanted to do is play with this stuff. I wanted to figure out the stuff that David Sheen's doing, structure from motion, and I wanted to fly the quadcopter because I think that's kind of amazing. The interesting thing is it's a convergence of, right now, is this convergence of technology and capability with this stuff just going consumer. So, you know, what used to be thousands of dollars before, now you can order on Amazon literally for 450 bucks and you can have one of these delivered to your house by tomorrow. And I imagine a few people probably will. This one in particular, it's called the FC40. It's from a company called DJI. It's called the Phantom. It comes with a little camera, a little 720p video camera. And that's what I first started with. I tried to take imagery with that. Turns out it's not actually a very good camera. It takes really good pictures of my son doing his bike stuff. But when you want to actually do anything real, it doesn't really quite cut it. So what I did is I just looked for a cheap, inexpensive 16, you know, 15, 16 megapixel camera that I could strap on to this thing that was a Canon that I could hack with the CHDK firmware. And so this is the camera that I came up with. There's lots of, you know, people post different cameras that they've used. It's a lightweight camera, 16 megapixels works pretty well. All this stuff is up here you can see in the future. So now we're up to, what, $580 or so. CHDK is sweet. 
You can, the reason you want to hack the firmware is once the thing is up in the air, you want to be taking pictures. So basically you can hack an intervalometer into there. So I can set it to like every two seconds or three seconds or five seconds to be taking pictures. So you get yourself all ready to go, hit go. It's taking pictures every three seconds and you can go do your flight and gather imagery. You also definitely want some prop guards because you run into lots of stuff. We'll demo that later outside. And then an extra battery for sure. These batteries really, you know, there's newer versions of the Phantom and that's one of the reasons the price on this one is so low. But realistically you get about 10 minutes per battery. And if you're going to go out and do what I'm trying to do which is kind of map our small town, you need at least two or three batteries to do that. And then, again, I had to hack it with a camera mount on the bottom to put the extra camera on there. And you can come see that as well. And so this is what you end up with is basically a forward facing first person view capable camera. Your cell phone can hook to the little forward facing ones. You get a real time first person view of where you're flying. But then the downward facing one is the one I'm interested in. That's the one actually capturing the imagery. So why is this interesting now? Well, it's phenomenally cheap. It's super easy to fly. Like scary easy to fly. That's my nine year old daughter collecting imagery. Like it's really easy to fly. Because it's GPS enabled, it locks onto satellites, you fire the thing up, you put it up in the air and you just let go of the controls and it will stay there. When blows it off to the side, it re-corrects for itself. It's amazing. Super easy to use. And they are fun. They're really fun. Little scary and fun. Oh, I wonder if my internet's not working. But anyway, Mapbox posted a really cool map which is one tile. And as long as you're in Portland, it works really well. Anyway, basically, you know, I'm not going to address the legality of any of this, right? Because who knows where it's legal and where it's not. You can't fly near airports. You can't fly in national parks. You got to stay below 400 feet. I'm like, I'm good. I live on a little island where none of that matters. So I go fly. But if you want to do the legality of it, check out their map. It's pretty cool. They intersect all of those known no-fly zones. And then, you know, people like the FAA post things like this. And it's like, well, are you a model aircraft? Are you a drone? Are you a whatever? You can't even like classify what you're actually doing. The important thing is it's kind of a no-no to do it for commercial stuff. So I'm not doing anything commercially. I'm just doing this as a hobby. And you kind of do fall into this model aircraft thing and there's a list of dos and don'ts. But anyway, what we're really here for is like, what can you actually do with this thing? Because it's cheap. Anyone can go do it. And I'm interested in my local community, how I can help my local community do some mapping and actually do something useful. So this is a story of that. Langley, Washington is basically just north of Seattle. Again, on this little island, about 1,000 people, about a square mile for the town. So it's a perfect little test bed. I can go up there. I can fly it. Turns out they did a reconstruction of Second Street, Main Street going horizontally there, over the summer. 
So we have a project that just happened and places like Google Maps don't have it yet. Okay. So that's a perfect test case for let's go out and see if we can gather some imagery and do some stuff to help the city out. First one is can we just come up with some sort of stitched imagery for that? I went out, this was just like a week ago. I like to fly super early in the morning so there's no one around. There's no cars. There's no anything. So I went out, collected about 200 images, two flights, about 10 minutes each, taking images every three seconds. You'll see the flight path here in a second. This is what you get, you know, for a single image. And then basically I used two options and again, this is not an exhaustive search of all the different options to do this stuff and I would love to hear other people's experience and how to do stitching, three-dimensional modeling, all that stuff. But these are the two that I came across that were easy and that actually worked. I tried Hugin and a bunch of these other ones, some of them didn't work. ICE, the Image Composite Editor, it's a Microsoft thing that was, I think, out of their whole Photosynth workflow early on. It's not open source but it's free to use and it does an amazing job. You just dump pictures in a directory, point it to it, it stitches them together, it actually looks pretty good. So this is an example of a stitch of some of the selected images. Basically I took one elevation where I was flying one pass down and back and stitched those together. It looks pretty darn good. The other one, which we'll see later, is this program called PhotoScan from a company called Agisoft, which is not free but it is mind-blowing what it can do. So this is kind of giving away some of the 3D stuff in the future. But one of the other things it can do is do stitching from this model that it generates. And this stitch is absolutely breathtaking. You can dive all the way down to like, you know, pixels on that car that's on the street and it's pretty amazing. So I had pretty good success getting a stitch and to be honest, like the people at the city are like flabbergasted because they just did this project. They have no kind of evidence. They want to go to conferences and talk about, you know, the walkability of the city and all that stuff and we can basically just dump them this image. They're blown away by it. It's awesome. Workflow number two is how can we actually make that useful? Can we actually put it like in a web map and can we do something like dump it into an editing tool and do some OSM creation? And so basically I took that best big stitch that I had, threw it into the QGIS georeferencer. You can actually pull up like Bing Maps or whatever on one side, your image on the other, drop ground control points, and it uses GDAL to warp it. It comes out great. Another one that I played with is MapKnitter. It's an online one. I found this one actually pretty difficult because it's image by image. So you have to do one image at a time. It's very time consuming. But again, you can download your warped images afterwards or shove them directly into OSM. And here's an example of downloading them. The workflow that worked best for me, though, is actually to dump that stuff out of QGIS. I have one big TIFF, and I basically ran nearblack on it to create an alpha channel. So it's got nice transparency around the edge. Then I ran gdal2tiles on the command line real quick for zoom levels 1 to 22. So I have all the zoom levels that I want for my image. And this is all pretty small.
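For reference, the nearblack and gdal2tiles step just described can be scripted; the sketch below simply shells out to the two GDAL utilities named above. The file names are placeholders, and the exact flags are worth double-checking against your GDAL version.

import subprocess

stitched = "langley_stitched.tif"   # georeferenced mosaic exported from QGIS (placeholder name)
masked = "langley_alpha.tif"
tile_dir = "tiles"                  # ends up as tiles/{z}/{x}/{y}.png in the TMS scheme

# nearblack collapses the near-black collar around the mosaic into an alpha band.
subprocess.check_call(["nearblack", "-setalpha", "-of", "GTiff",
                       "-o", masked, stitched])

# gdal2tiles renders the tile pyramid for zoom levels 1-22; that folder is what
# gets pushed to GitHub Pages and consumed by a web map, as described next.
subprocess.check_call(["gdal2tiles.py", "--zoom=1-22", "--webviewer=none",
                       masked, tile_dir])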
This actually kind of leverages my talk from yesterday which is about trying to host stuff on GitHub for free. So I pushed those tiles, literally the image tiles, the 256-pixel tiles, that whole folder structure, up on GitHub on GitHub Pages. And I get basically little endpoints for all the tiles, the TMS tile scheme, up on GitHub and then I can throw those into a little web map. And basically now my city, for free, has a little app demo that includes that imagery. The other thing you can do is you can actually just point iD directly at that tile scheme as well. And that's my image inside of the iD editor and then I can go in and I can edit all the OpenStreetMap data, you know, the new parking stuff and, you know, how they changed the street. It's pretty darn cool. And workflow number three is actually kind of the cool stuff. I haven't figured out how to make this actually useful for anything yet. Like I haven't actually made a DEM or, you know, anything that I would use at work but it sure is cool. I mean, it's really cool. So there's two workflows for this. There's actually three but we couldn't get the third one. The first one is to use a thing called VisualSFM and it's a completely open source project and I started with this one and to be honest I saw the outputs and I was blown away and then I saw the outputs of the Agisoft one and you get even more blown away. So basically what it does is it takes the batch of images in. So here's like the 200 images, loads them up. The whole concept is that it's going to take all of these images and it's going to look for features, structure inside of each of those images, and it's going to try to identify the structure in the individual images. Then it's going to go through like a matching process where it's actually going to take each of those images and try to match it against the others. It's going to try to look for common features and place those images kind of next to each other, so that those images know about each other. And once it can do that, it can look for how those images differ. Things like, you know, when something gets further away, it's going to converge or, you know, the warp because you're seeing it from the side versus straight down, and it's actually going to try to place those images then in three-dimensional space to make it all work out, to make it so that those images mean something, that those features could exist and match. And here on the left you can actually see, this is VisualSFM trying to do those feature matches and there's a bunch of cruft around the outside but it's smart enough to be able to kind of throw those away around the edge. And on the one on the right you're actually seeing the top is one image, the bottom is another and it's actually, those lines are connecting similar features. So you can see like the parking strip lines, it's identifying the end of the parking strip in one and the other and it's matching those up and it's saying these two images are correlated, and then in order to be able to make that happen, it can place them in 3D space. So all those little squares are basically the images up in the air where it thinks you took them from the quadcopter. Where those features converge on the bottom basically generates a sparse point cloud for you on the bottom and that's what you see. So you actually start to have some 3D structure and from that you can actually generate, you know, through some interpolation, a dense point cloud.
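To illustrate the feature-finding and matching step described above, here is a toy sketch using OpenCV's ORB detector. This is not what VisualSFM or PhotoScan actually run internally (they use SIFT-style features plus bundle adjustment to recover camera positions and the point cloud); it only shows the idea of detecting features in two overlapping photos and connecting the similar ones, and the image file names are placeholders.

import cv2

img1 = cv2.imread("IMG_0101.JPG", cv2.IMREAD_GRAYSCALE)   # two overlapping frames
img2 = cv2.imread("IMG_0102.JPG", cv2.IMREAD_GRAYSCALE)   # (placeholder file names)

# Detect keypoints and descriptors in each frame.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with a ratio test to drop ambiguous pairs.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
print(f"{len(good)} feature correspondences between the two frames")

# Drawing the correspondences reproduces the "lines connecting similar features"
# view shown in the talk.
vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None)
cv2.imwrite("matches.jpg", vis)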
The more overlap you have with your pictures, I mean there's a lot of tweaking you can do about how you fly and how you process this stuff to get better and better surfaces, but this is what you get, this is a pretty good flight for VisualSFM to get a three-dimensional model of downtown Langley. I find that fascinating because there's no GPS, there's no special tags in the images, this is a commodity camera, 20 minutes flying over your town and you have a three-dimensional model of your town where you can look at building heights and all that kind of stuff. Phenomenal. You can take that and then dump it into other programs, you can look at things like MeshLab where it basically takes the point cloud from VisualSFM and then you can do things like generate surfaces in MeshLab and then do things like texturing to try to actually take those images and project them back down onto the surface to get kind of this video gamey 3D kind of thing going. So that was VisualSFM and MeshLab. The other one is this Agisoft software which is mind-blowing. It comes in two versions. One is actually really cheap and everything that I've done so far, which has no georeferencing, no semblance of tying your 3D model actually to the world, is pretty cheap. It's like 175 bucks. If you want the one that can do the georeferencing and the scripting and all that other stuff it's like $3,000 or something. So I have a demo version and I've just been playing with it. But anyway, this you can see is basically the same thing. The cameras are kind of placed up above. It does a sparse point cloud below so it actually looks very similar to what was happening in VisualSFM, but the dense point clouds and the texture mapping that it can do are mind-blowing. And so this is a three-dimensional model of that same street and kind of a closer view of it once the texture has been applied. And I have it running here so I'll show you in a minute. We can actually spin it around and you can see it, but pretty darn amazing. So more examples. This structure from motion stuff is being used for lots of different stuff. We happen to be geo people and have quadcopters but you can do it on pretty much anything. So this was Peter Coom, another CUGOS friend of ours. He was like, hey, test these out, and he was at work and he just like chit-chit-chit with his phone or some camera and took pictures of this little tiny doll on his desk and it's amazing. You can like spin it around. It looks like a real three-dimensional thing. It's just from, again, a commodity camera doing three-dimensional modeling. That picture of my daughter doing the flying, this is actually her elementary school. We had a science day there and we let the kids go fly the quadcopter and then we generated a 3D model and they can like fly into their school basically and see their school. It's just fascinating. This is where I live, my little farm, so you can go and look. You can see the bike jumps on the lower left and all that stuff. And then we've been trying to start to do some comparisons too. So this is actually the same data set between PhotoScan and VisualSFM as far as dense point clouds. And every time we fly we try to like do something different, like straight down imagery, a little bit of an angle. Like should we take pictures every three seconds to get more overlap or is five seconds okay? Should we fly high? Should we fly low?
And we're kind of coming into the sweet spot of making sure that you get enough that you get overlap but the battery is only less ten minutes and you want to get big areas. So it's just fun to play with. So the future is that's kind of the third workflow which we never really got to work yet but there's another thing called Bundler SFM which is basically an attempt to do all this kind of in batch on the command line in a way that we could think about scaling up servers to throw these images at because that one of downtown Langley basically I thought I was going to melt this laptop. Like it sat there for like eight hours straight at 100 C plus on the CPU just burning on my desk. And it would be great if we could just like, you know, my little rack of servers I could go throw this stuff out there and do. It's complicated, it's not really well supported, it's all kind of researchy university type stuff so we haven't really got that working yet. The other is to actually do something like pin this with ground control points and generate a DEM of my property. That would be cool or downtown Langley so they could look at building heights and all that kind of stuff. And the other is 3D printing and that one I actually have an example of. So automation, Bundler SFM, DEMs, there's actually papers out there, this came from, I didn't put the reference so don't kill me, but this was from an academic paper where someone went and there was like 30 meter DEMs in some remote place in Africa and this guy flew over with a drone, did visual SFM and the one on the left is actually this really high resolution DEM that matches amazingly close to the official 30 meter one that he had to do local data collection and super cheap for nothing. This is, Chris Schmidt, I'll wave to him on the camera because he's going to watch this later. I wish he could be here, you know, he wrote open layers, he's been in this community a long time, couldn't come out this time, he works for Google now, but he chases shiny objects like faster than I do. So as soon as I showed him that I had one of these, he like went out and bought one the next day and he's been flying his with a GoPro and so this is a still out of, he was capturing imagery of this building the other day. This is a point cloud capture in visual SFM and then he threw it into MeshLab, trimmed off all the cruft, you know, all the stuff that was around that building. But then he took it one step further and he actually threw this into his 3D printing software for his little 3D printer, he had it home and he went to bed and he was all excited and he woke up in the morning and he had this. Which again, I mean it looks like just a pile of goo but it's phenomenal. It is phenomenal that he could for like $600 have a quadcopter and a camera go fly a building and have that on his desk in the morning. That's it. I do have a photo scan running right now. So this is, you know, you can basically see the cameras up there but then you can literally drop in and see, you know, the three dimensional view of the town. Again, this is, there's no inherent GIS in this. This is literally taking this commodity $100 camera and throwing it up in the air and taking a bunch of pictures and you can generate these models. It's mind blowing. Hi. I'm from GeoCague. We're a non-profit providing GIS implementation for small communities. We're doing a pilot project with City of Hood River. We are very interested in these kinds of projects. 
I'm wondering if we could work with you to find some candidates to replicate what you did with your community with some other small communities. Absolutely. I love Hood River as well. And they're gung-ho with GIS so it would be an easy sell to, if you want a vacation in Hood River with me, I'd love to do it. But we were, I've got a photogrammetry background so I've done lots of ortho rectification. Have you seen ortho rectification options that you tried and didn't work or you haven't found one yet? Pretty much. You guys know as much as I know now. Literally. Like, this is all I've done. So I haven't tested really any other packages. I mean, there's lots of other options as far as stitching, you know, taking individual images and QGIS and doing, you know, using GDOL to do the stitching and stuff like that, being smarter about the stitching, using other software to do the stitching. This is just all I've done. So I would love to try more and more stuff. I was thinking about you making sure we, cool. What drove your selection of this airframe versus the 3D robotics stuff or some of the other stuff out there? It's cheap. I mean, I had to convince my wife that it was okay for me to buy that. I mean, it's, I mean, especially for like companies and stuff, it's like, good Lord, it's disposable. And for a normal person, open source software developer, you know, three, four, five hundred bucks is totally doable. And again, you know, that same, the Phantom a couple years ago was a couple thousand bucks. And they just came out with a new one that is a couple thousand dollars. And it drove the price of this one down to like, I mean, it's a no brainer. It's super cheap. But I do know some people that have other makes and models. And you saw like the guy from Google who had the backpack with the other one. That one, he's got the full strap on goggles for the first person view and all that stuff. I mean, that's way too much. I just want something cheap and easy and fun that I can do with my kids and go map my local community kind of thing. Hi, I really enjoy you say you are playing. Yeah. When I say that to my professor, I am playing with things like that. She gets mad. But actually, I would like to know, do you, what kind of flight planner do you use? Flight plan? Yes. Are you? Yeah. So, I mean, these things truly are remote controlled hobby things, not a fully autonomous drone, right? So, I don't actually pre-it's not like a sense fly where you can pre-do tracks and then you just let it go and you go have your coffee and it lands itself. This one, you got to fly. And so, I try to, I mean, it's purely based on how big of an area you want to do and how many batteries you have and how much of the area you can cover. So, I literally go down, you know, I've only done Langley a couple of times and you go stand at one end and you're like, I think I can get with three seconds a shot going about that speed, I can get down and back, change the battery and go down and back again. And that's my flight plan. I mean, there's nothing scientific about it. Then I can recommend the flight planner program which says this is open source and you can plan more so you can get more out of your batteries. I would, I'll look it up. Cool. Sweet. Do you have one or do you work for them? Okay. Yeah. Just to throw in that I think you're possibly slightly ahead of the UK's national mapping agency in this, in the, the first time they took their drone out they got it stuck in a tree. Didn't actually manage to map anything. 
You need prop guards for sure. What? We can try to fly it. I get nervous inside, but maybe out there, like in the lane. Over the heads of everyone. What was that? Over the heads of everyone. Yeah. One, a shameless plug. OpenDroneMap is tomorrow morning at 10:30. An attempt to put together the Bundler tools and all the rest. So I was wondering, what do you see in, you talked about texture mapping, talked about point clouds and limitations of Agisoft versus the Bundler, PMVS, CMVS sort of toolchain that VisualSFM pieces together. Is it on the point cloud side of things that you're seeing the difference, or on the UV mapping, or with the texture? It's hard to say because you know a little bit more about what VisualSFM is doing, like the Agisoft stuff is kind of, you just push a button and they say, I mean, I don't know if you see, like their thing is this simple. It's like workflow. You align the photos. You build your dense cloud. You build your mesh and you build your texture. You just like press those buttons and drink coffee and you get the 3D model. So it's hard to say which part along that way is really it. And you get to heat your coffee on your laptop too. Yeah. But like looking at what they call the sparse cloud and what I call the sparse cloud in VisualSFM, they look actually fairly comparable, and the dense cloud looks really good with theirs and there's a lot of big holes in the one that VisualSFM makes. But again, I'm not tweaking any parameters whatsoever in VisualSFM. So there's a good chance it's completely operator error, but it's really hard. There's very little documentation as far as how to do any of this. So then I'll go ask David Sheen because he uses this kind of stuff a lot more than I do. Awesome. Thanks. I live in a pretty small forgiving community. So no one really gets upset when I go out and do it. With all this software, is it using any of the GPS information from the image itself or is it just completely? No. So the camera has no GPS at all. And these 3D models are just floating in 3D space. They're not actually tied to the planet in any way. Both VisualSFM and the Agisoft PhotoScan stuff have that capability though. It's the really expensive version of PhotoScan, and it's this other tool, like a plug-in that you have to install with VisualSFM, that I haven't been able to get installed correctly to do that. But you can do ground control points and start to pin your surfaces to something real and generate DEMs. And there's papers and videos out there of people doing that. I haven't done that. There is a GPS in the Phantom but it's purely used to assist in the flight of the aircraft. I would think it would make the photo analysis go way quicker to know which photos are beside each other anyways. Yeah. Which we should do. We should strap on a $200 camera that has GPS instead of the $100 camera that doesn't. There's a town in Colorado that's passed an ordinance that you can shoot down a drone. Have you considered any kind of defensive techniques for yours? No. I mean the whole drone thing is kind of silly. I mean, people get all worked up about it. I mean it's no different than flying a little remote control, you know, whatever. But it's definitely a hot button thing. I tend to have a low profile where I live. I go out at six in the morning when no one else is down there and I fly it and then I leave and no one knows I was even there. Yeah. Yeah.
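Picking up the ground control point answer above: PhotoScan and VisualSFM each have their own GCP workflows for pinning a model to real coordinates, but as a simpler stand-in, an exported, unreferenced ortho or DEM raster can be pinned with GDAL's GCP tools. The coordinates, projection and file names below are made up for illustration only.

import subprocess

# Hypothetical ground control points: pixel, line, easting, northing (e.g. EPSG:26910, UTM 10N).
gcps = [
    (512,  480, 538200.0, 5318400.0),
    (3100,  620, 538900.0, 5318350.0),
    (1800, 2400, 538550.0, 5317900.0),
]

# Attach the GCPs to the unreferenced raster.
cmd = ["gdal_translate", "-of", "GTiff"]
for px, ln, x, y in gcps:
    cmd += ["-gcp", str(px), str(ln), str(x), str(y)]
cmd += ["langley_ortho_unreferenced.tif", "langley_ortho_gcps.tif"]
subprocess.check_call(cmd)

# Warp into a real coordinate system using the GCPs (thin plate spline transform).
subprocess.check_call(["gdalwarp", "-tps", "-r", "bilinear", "-t_srs", "EPSG:26910",
                       "langley_ortho_gcps.tif", "langley_ortho_utm.tif"])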
You're welcome to come up and hold it or drop it or whatever. And we can too. I'm more than happy to take it out there. We could try to fire it up and not run into something. But yeah. Thank you.
|
Quadcopter - Phantom FC40 ($500). Camera - Canon PowerShot ELPH 130 IS 16.0 MP ($110). Opportunity to engage your local community to produce open data - priceless.Let's get to the point. Let's talk about hardware and software to get out there and actually map some stuff with a quadcopter. This is the story of my adventures hacking with a Phantom quadcopter over the last 10 months to make local maps... and of course have fun. The only rules... it has to be cheap and the software has to be open source.We will go through the hardware, including purchasing, setting up, and flying the quadcopter. The camera is hacked with CHDK and strapped on the quadcopter with some velcro to a vibration dampener cut up with a dremel tool. The processing software is a pain to install, but we will talk through it including software options, how to get your processing off loaded to your video card GPU, and how we as a community can make all this easier in the future. Finally, we will look at what you can actually make... including mosaics, 3d models, and DEM's of your local community.Quadcopters are cheap, fun, and amazing for engaging your local community to produce open data. Let's do it!
|
10.5446/31721 (DOI)
|
All right, I think I'm going to get started. I wanted to pad a couple extra minutes because I figured people were addicted to cookies and soda. I certainly was there earlier. If you're here, I hope you're interested in GeoMOOSE. If you're not interested in GeoMOOSE, then at least I hope you're interested in learning some lessons about what it takes to keep an open source project running for a particularly long time. Now, I do need to issue this warning. When I decided to do this presentation, this is really the first time that I've done something that's a little self-indulgent. You're basically going to hear me talk about something that's a huge part of my life. GeoMOOSE started in 2004, and that's the year I graduated from university. So this has really been a core part of my adult and professional life. So when people make fun of GeoMOOSE, I used to take it kind of personally. I've learned to move on from that and make fun of them instead. And because of that, there's a good chance this presentation will feature all of these things. And so if you're on the live stream and you are offended by profanity, now would be the time to change which stream you're viewing. So where did GeoMOOSE come from? A lot of people have seen it in its modern incarnation. But realistically, GeoMOOSE came out of a little project at the city of St. Paul. In fact, it came out of Bob Basques saying, hey, do you have some time to look at something? It was literally a tap on my shoulder. At that time, a lot of municipalities, unbeknownst to me, were starting to really collect high-resolution aerial images. And Bob had happened to collect a whole bunch of them, going back something like 50 years for the city of St. Paul. And we were getting new ones with really high resolution. And we needed to find a way to deliver them to users. We didn't want everyone to have to use desktop software. We just had lots and lots of imagery and other data sets. And we wanted to get them in people's hands. Now, one thing you've got to know about Bob is when he says, hey, do you have time to look at something, it's already more of a loaded question than that statement usually implies. If you've ever argued with him on the internet, which you may have done on any number of OSGeo mailing lists, you'll recognize him as the guy in Comic Sans that's saying a whole bunch of incendiary things. He permeates that into his work environment. So what they showed me was a very small web app. And it was god-awful ugly. The thing was barely usable by even the lowest common denominator standards. And at university, I had actually studied what was very much the early web. I had done work on Nokia phones that were more tuned to playing Snake than they were for surfing the web. Many of my early CGI programs were written in C, because that was the most capable thing to do it in at the time. And I had developed a little bit in Perl and learned with Bob that that was a language I was going to have to love. Much of the original development was done in Perl over the web. And that was actually pretty new at the time. And the very first version targeted what was the sexiest web browser at the time, IE6. A lot of people here will talk about how bad IE6 was. But when you had to compare it to Netscape Communicator, you felt pretty good. So we've been doing this for just a little minute. And when it came time to evolve the platform, we actually had an ignorance of what was going on around us. And that ignorance bore evolution.
We started writing a configuration file in XML, and that would get translated by an XSLT into a web application that did magic things with JavaScript. And they really were sort of magic things with JavaScript at the time, because we were learning while we were making this. And the web browsers were actually making this technology work as we were making this. We would occasionally run into issues where various releases of IE6, like sub-updates you'd get on Patch Tuesday, would break the application. We sort of showed this around to our local Twin Cities MapServer user group, which we discovered existed during the process of developing this application. And a few people identified that we had kind of a good idea and a good start, and we had a solid technology back end for publishing a web client. And some folks from Dakota County, Minnesota, and Houston Engineering in Minnesota deserve a lot of credit, because they managed to get us some federal funding. A lot of the GeoMOOSE you see today, at least in ideas, came from an FGDC grant that kept me working during the day and during the night. I would literally go into the city of St. Paul in the morning, start working on GeoMOOSE stuff and all my other administrative activities for the day. And at night, the city, Dakota County, and Houston Engineering would pay for us to keep writing GeoMOOSE. There's a lot of work that goes into that. You don't work like that anymore. That's because you ask me too many questions, Bob. So this is where the very kernel of GeoMOOSE came from. It came from two young guys, Jim and I, burning a lot of midnight oil and putting a lot of heart and soul into it. And that's the end of the history lesson, because frankly, the history is going to get boring. I think it's more important to talk about what we learned over these 10 years. Keeping in mind, in late 2003 and 2004, when we started, OpenLayers didn't exist. ArcIMS was considered the pinnacle of technology, and we didn't really even know about it. And most of the time, people bought someone else's package to run on ArcIMS. So they were spending all sorts of money. And the MapServer demo, MapServer 3.0, really, really awesome piece of software. But the Itasca demo didn't really offer a user experience that we felt was palatable, which leads me to the first lesson we learned while developing GeoMOOSE. Things will change around you, not just in this industry, but with any open source project, and really with any technology. When you start something, you better be prepared to defend it. When OpenLayers came out, and Schuyler and Chris were really pushing it through MetaCarta, we looked down on it, actually. It was less capable than the in-house JavaScript library that we had developed for GeoMOOSE. Now, ours was ugly. It wasn't accessible. And we were doing a terrible job promoting it as a library, but it had more functionality. So we didn't use OpenLayers for a long, long time. Now, OpenLayers is not even considered terribly up to date. People are arguing about OpenLayers 3.0 versus Leaflet versus weird amalgamations of Google Maps and the Esri API. And MapServer back then was not even a terribly full-featured map renderer. But because we were working at the city of St. Paul, and Steve Lime was still working in downtown St. Paul at the time, we could get his ear after work and say things like, hey, Steve, we're trying to do this with GeoMOOSE. Can you make this work in MapServer for us so that our life is easier?
Also, the thing that we noticed when we first started working on GeoMOOSE is that JavaScript hasn't ever had anything that made classes look good at all. It's still fucking ugly. Like, I don't care what you say about loving AMD or loving CommonJS. It looks terrible. And at that time, it looked even more terrible because how we dealt with classes and names was just making function names that kept getting longer and longer. So there's underscores instead of dots. The second major lesson that we learned is that youth really is a brilliance unmatched. Two guys out of university decided that they wanted to make a web client. And we were going to do a really good job at it. Frankly, once we learned what ArcIMS was and saw some of the applications, we basically thought we were the good Jedi versus the evil empire. And it's amazing what simply walking into work and saying, we are going to screw Esri, does for your motivation. I mean, that cocky kind of self-confidence is really, really exhilarating. And I don't get to do that anymore. I just, for some reason, don't have the same attitude of walking into a room full of people and saying, you know what? Screw it. As far as I'm concerned, Esri's complete garbage, and I want to do everything I can today to make their lives miserable. Particularly after a few conferences of doing that, when local Esri reps come up to you and say really ugly things. The first MapServer user group meeting was the first time we presented GeoMOOSE to a wider audience. And I said a lot of those dirty things about Esri. And I had three of their representatives come up to me later and say things about libel and defamation. The other great thing about youth is it provides you with time. I used to think that I was a really busy guy in my early 20s. Then I got married. Then I got a house. And then I got two children. Now I'm busy. Take that with the fact that I work for a small business. And you really, really learn that there's a difference, that five minutes to yourself on a given day is really valuable. Having four hours to burn on an open source project just because you love it doesn't exist anymore. Another problem that we found is in promotion. There are things that are cool to work on and things that people think are really lame to work on. I can tell you now that an application that primarily prides itself on working really well with parcels is considered really lame to work on. And so despite your best efforts, developers might not come. It takes a lot of time, folks, frankly, to make your project organized well on GitHub or SourceForge or on the OSGeo stack, which GeoMOOSE has done over the years to try and evolve and make our project accessible to people. Documentation, making things more API structured. We have even subsequently adopted OpenLayers and Dojo and other libraries to just try and attract people to its epicenter, but they might not come. And this is a problem because this is what I do for fun. That's a rally car. What you see in the bottom left-hand corner of your screen is me crashing that rally car. 95% of the lines in GeoMOOSE are ones I either wrote or touched, and I crash cars for fun. We really need another developer. The other problem you run into is that writing web applications for doing this kind of parcel work was sexy for a time. I think it was about 2007.
For anyone who was around then, it was really awesome to write these things that gave Esri the big FU: we can do parcels at the municipal level better than you can, and that's your bread and butter. It's not fun anymore to do that for a lot of people. But the thing is, despite sexy kind of being temporary, there's still a point to it. Even these old dudes wearing gray wigs, that's not really all that attractive anymore, but some people need a wig. GeoMOOSE happens to be a wig. Some people need to identify, select, and print reports on parcels, and god damn it, we're going to give them a tool that does it really well. And we want to give it to people who don't have to program, who don't have to do anything. We set up a mission statement, and we're going to chase after it as best we can. So for anyone who uses it for those purposes and loves it for those purposes, thanks for being here. The other lesson I learned as a project manager and as a father is that there's a real great analog between your users and your children. Sometimes you make decisions they don't understand, but it's really in their best interest. And sometimes when they're really complaining about how it smells funny, it might be because the cat actually peed in the corner and you have to clean it up. So it really is important to listen to your users. To be perfectly honest, I don't even know how many users we have for GeoMOOSE. I've got some ideas about download counts and page views and statistics and tickets, and we all have those kinds of ideas. But every now and again, I'll get a random email from, oh, I don't know, Mongolia, Sweden, various bits of Africa, people asking me just random questions about their application install. People I've never heard of, people who've never showed up in our download logs. And honestly, if I could fill a room with GeoMOOSE users, not for a user conference, not for anything else, but just to tell them thank you for using our product, I would. Because just thinking that someone's out there and I could be making their day better really helps. Free and open source software is open source. And it's software, but of course it's never free. This really echoes something you got from the keynote this morning. In fact, I felt rather jilted because I made this slide way before she made that slide. And when we first started the project and I was working for St. Paul, it was my job to work on GeoMOOSE. And I really enjoyed it. And you look at other open source projects, things like Leaflet, OpenLayers, or Bootstrap, or Angular, any number of these really popular open source applications, open source libraries, they're all fallout from a much bigger corporate effort. And GeoMOOSE happened to be fallout from a corporate effort. That corporation happened to be the city of St. Paul, which is a municipality. But in recent days, the city of St. Paul doesn't even use it that much anymore. The place that originated it is still trying to hang on, but they find it's easier to find someone who has a piece of paper that says, I know how to run Esri software, than it is to buy a service contract with a small company, frankly. So what happens? Well, when a guy like me leaves a municipality and starts working for a small business, you attach a number to every minute of your time. And that number is a client number. It's an invoice number, and it's a dollar amount. And so sometimes people would tell me, hey, there are 13 things wrong with GeoMOOSE. And I would say, hey, that's $1,300 of my time.
That's not a great reaction, either as a project manager or as someone who's trying to work with what they thought was free software. But it makes it hard. It makes it hard to strike that balance when you're really trying to keep something going and you're not really being paid for it. So what do you do? You learn to love the beast. I don't think I do open source software anymore because I necessarily get paid for it, because someone's paying for me to contribute to it. I do this stuff because I love it. I do it because there's a passion for it, and I do it because there's really a lot of useful things that people do with it, and I love to see it. Even though we have to make certain decisions in GeoMOOSE that I hate. Hate thing number one is please make this work on Windows. Well, a county administrator has a server. That server is Windows Server 2003 sometimes, maybe 2008. Might even have service packs applied to it. That'd be awesome. And they want to run a little parcel web application. Well, we're here. So how do you make that not painful for someone who's in that role? Well, the easy answer used to be run MS4W, and that comes with PHP MapScript on it. And all you have to do is unzip this package in the right location, and magically, you have a web application. A couple hours later, some frustration, a Google search, three posts to the users list, and possibly a back channel email to me, and your web application will be up and running. But that doesn't mean I don't want to put PHP on a ceremonial pyre and burn it. In fact, I'd sort of like to take the PHP code in GeoMOOSE and murder it like Rasputin. I would like to take it, shoot it in the head, quarter it, throw it in a well, cover it in lye, pull it back out, burn it to ashes, and then throw it in the ocean. And those are my kinder thoughts. The other important part about an open source project is to have shameless promotion. I have no qualms standing up here and saying, you should try GeoMOOSE. You should give it a try, and if you don't like it, well. But it's really worth being involved in a community; whether it's with an application or whether you're a user, being involved in the community is critically important to any of these projects. And finally, look to the future and have fun. Frankly, this has been probably one of my most fun presentations about GeoMOOSE. And again, I'll admit it's self-indulgent. But when you're doing these things, you just have to pick something that'll entertain yourself and run with it. Part of being the guy who gets to write a lot of the code is that I get to make a lot of the decisions. So where we go with it and how we focus it has been up to me, with some really important feedback from folks. But if you lose the fun, it's not worth doing. Have fun with your open source projects. Do neat things with them. As promised, I mentioned that I'll talk about some future versions. GeoMOOSE 2.7 is about 18 months overdue. Never ask about when the next version is coming out. Just pull it from trunk. So Saturday, we're going to be doing a GeoMOOSE code sprint. If none of you are interested in coding, that's great. Come talk to us about writing docs. We'd love it. We know the documentation is terrible, at least in organization, if not in content. We'd love to see you there. But you also have the opportunity to come and cheer about your favorite bug and ask me to fix it while I'm sitting there. When we start talking about GeoMOOSE 3.0, we also have some ideas.
Trying to keep the focus on making it something that is easier for people to drop in and customize. We're looking at trying to break out individual components to make it more widget-y. That way, you can just integrate components of GeoMOOSE into your current sites. So instead of having a full-featured application, we'll still ship a full-featured application, but make it easier to pull the bits and pieces out so you can use it how you want. We're also going to do the impossible and clean up the JavaScript. I don't want to think about that too much. Better documentation, as I mentioned earlier; if you want to come help us with that, it's always welcome. I'll even set you up on GitHub. Get yourself a GitHub account. I'll let you do documentation. It'll be fun. We've also added some functional testing, because one of the things that's important when you're trying to figure out if you can build a liquor store at that parcel, that it's 1,000 feet from an elementary school, is that we're actually setting up tests now in GeoMOOSE to prove that all of our calculations work. So when people ask about that, you can know that it's going to work. We're probably also going to explore some other deploy methods and get rid of the PHP, because a lot of the PHP stuff is falling out of favor in the larger community. We need to modernize that code and maintain that code and know that our underlying libraries are up to date, and that's just not happening anymore. And the other one that's a personal favorite of mine is we'd like to add unified searching across WFS and shapefile and PostGIS layers for people looking to do the more Google-y bang in an address, bang in some rough information, and find it. But do it all within the GeoMOOSE framework, so it's pretty seamless and easy to configure. Some image credits, to make sure I don't get in trouble with anybody. Made them small, so you can read them. All right, thank you, everyone. Yeah, Bob. Which 13 things? Yeah, no. We're tracking them in GitHub. Don't worry, there's an issue for it. Yeah. Other Bob? I just have to know, being from Maine, where did you get the name GeoMOOSE from? We had a top-notch brainstorming session one night and made sure to come up with a good name. Actually, the internal project name at the city of St. Paul was called Gizmo. Again, this is sort of ignorance bearing evolution. We didn't realize that everyone named their first GIS project Gizmo. So when we open sourced it, we had to come up with a name, and Geo seemed appropriate. And then we wanted something that was slightly reflective of Minnesota. And the Minnesota Zoo, their mascot for years was an M that looked like a moose. It's actually frighteningly similar to the logo we paid someone to invent a few years ago. But that's where the original name came from. All right. Thanks again, everyone. Thank you.
|
GeoMOOSE released its very first version in 2005. At nearly 10 years old the project has continued to hold on to its original developers and many of its foundation users. Over that lifespan the project has allowed the development team to observe struggles in changing technology, attitudes, and the dedication required to keep such an open source project relevant as it ages.Nearly 10 years worth of dirty laundry will be aired! And a preview of GeoMOOSE 3.0 ideas! And slides with exclamation points!
|
10.5446/31722 (DOI)
|
How are we all doing? A little bit tired, a little bit, a little bit warm, feeling like a bit of a nap. It's a good time to do that. Let's have a bit of a nap. We're going to talk about open data. Let's get going. So, data comes in many shapes and forms. As geographers, we use data every single day, but we should note that data is on an infinite spectrum of possibility only confined by our sort of common understanding of the universe, okay? So in this example, we understand temperature. We probably get the idea of check-ins. I know that 2% is 2 in every 100. I've got a strong familiarity with ice cream. And this factoid is somewhat geographically contextual. It's coming from forceware. So, that's cool. The point here is that I can understand this data point without much further explanation. It kind of makes sense to me. It's a kind of solid general statistic, one which has been derived from a vast array of crowdsourced forceware data. So, it's coming from a whole bunch of other little data points to make one big data point. It's also relevant in our greater understanding of things like ice cream and marketing, perhaps check and behavior and even summertime habits of humans. So, it sits on what we'll call a wide and open spectrum. So, my name is Will and I have a problem. I like data. I do lots of stuff with data. I like trying to understand it. I love the complexity of data. It's kind of like detective work. It's deductive, yeah? I started Spark Geo, which is our little company four years ago. But even before that, I was deeply embedded in data doing geosysical analysis, doing NDVI, doing a whole bunch of stuff with remote sensing, doing spatial distribution of chemotypes of Scotts Pine saplings in the soon to be independent country of Scotland, that point. I helped clean up corporate data sets. I helped do a whole bunch of stuff since coming across a pond, been analyzing forestry data and resources data. But since we started Spark Geo, we've been helping kind of social networking kind of data. So, the magnitude has increased enormously and the type of data has changed a lot, but it still comes down to data, data, data. I would imagine that for a lot of you, the story is somewhat similar that every day you're messing around with pretty weird data. And that's one of your central value propositions is that as a person, you know how to deal with that stuff, the latest thing. So, Spark Geo is a technology company. There was a relatively recent time when clock speed and pixel density and stuff like that and specification would drive technology. Well, that's kind of changed where we care a lot more about features because specifications have got to a point where we're all kind of happy, computers go pretty fast, internet goes pretty fast. We care a lot more about experience and experience more often than not is driven by data these days. So, that means that Spark Geo is actually a data company. Data drives your experience of the internet and in fact, data drives many other parts of the world. It's a measure by which Spark Geo is graded. It's a measure by which I would argue we are probably all graded in one form or another. In the end, it doesn't really matter how good your map technology is because if the data is wrong, the technology is a bit of a failure and it's galling when you've gone to so much effort to build a wonderful map. All the buttons work, all the icons look lovely, but they're in the wrong place. That's difficult. People get upset with that. 
I think many other companies and organizations probably see themselves the same way. I would argue that many companies which used to be something else are probably now data organizations or data companies in some way or form. Certainly municipalities have come such. So, as a data company, a technology company, we're also a data company. We live at the intersection of technology and data within the context particularly of geography. Open data is awesome. So, here's my one kind of yay to everybody. It's worth noting that in BC, we're super lucky. I'm going to say we're, I'm living in BC. So, BC. We're really lucky. The Prince of Government's done a fantastic job. We've got lots of resources and, you know, over the years, the access to data has got better and better and better and better. So, first up, I want to congratulate all those people who made that happen and some of them might be here. I'm not sure. There are certain people in the conference who certainly are involved in that process. But I think the story of kind of progressive openness around data is one that is witnessed kind of across the board. I think we see a lot more openness, a lot more data publication. States, provinces, cities, regions, lots more data out there. So, that's a great thing. Yay, great job, guys. But you might have noticed, I mentioned it earlier, I'm a Scotsman, which means I'm never actually happy or terribly satisfied with the situation. So, as a Dara Scotsman, I'm going to tell you a story. So, I come from a little town in the north of British Columbia. This is a map of my little town. You might, so some of you might get it, some of you might not. And this is my story starts with a hackathon we held in Prince George. We were looking specifically at open data for the city and regional district to find out, you know, we just wanted to get a bunch of technologists together. We're a small resource town, so technologists are few and far between. So, the opportunity to network is really good. The opportunity to talk about open data is really good. Our municipalities and cities have just been releasing data, so it's really cool to sort of hack away on that. So, we had various teams doing various different things, there were different ideas that they followed up on. One team in particular had this problem they wanted to solve. They had a simple idea, they thought, hey, we want to compare the budgetary financials of different municipalities with each other and find out where you get the best buying for your tax bud. We want to understand where I should live to get the best services for the lowest costs, you know, where's the best place. So, this is like from a business perspective, that makes perfect sense. The idea of being able to give the consumers the citizenry, an idea of the best value municipality to move to. Seems reasonable, seems interesting. It turned out to be quite a tough, tall order. And that's mainly because no one is really talking the same language. No one is talking the same language. And by language, I don't mean spoken written programming languages or even data transfer formats. I'm talking about the raw absolute data points, the numbers. The numbers published by different municipalities mean different things, which means there was no opportunity for any level of comparative analysis. The hackathon team were left comparing apples with oranges because of the vast spectrum of data we talked about. 
The municipalities of BC had found themselves seeing and measuring the financial world in slightly different ways and that slightly different perspective led to slightly different financial data products which meant completely different data products, which meant no dice for the hackathon team. The point here is not to beat on those municipalities. It's not really, you know, they've come through very troubled waters to get to the point where they are releasing data. But the point is highlighting that perhaps as an opportunity cost in general around this kind of stuff. Any review of the appropriate data of the comparative analysis of budgetary data, we found that a whole bunch of different technologies were at play. Different technologies, different platforms, different, a whole bunch of different stuff. Each technology was providing data in a slightly different way. The geospace, we also see a whole bunch of tools and technologies is a gazillion different tools for different jobs. And that maybe that's a good thing that's a bad thing. We'll see. The expectation of the hackathon team was not that they would find exactly the same thing. I think that would be unrealistic, but that they would find maybe different dialects of the same language, you know? The things that are common enough that you can mash them together in a meaningful way. I'm a geoguy. I knew that was going to be the outcome. I looked at that. I thought that's a great idea, guys. You should do that. Secretly wondering if they would have some special sauce that I hadn't seen before that I could steal from them and use in my work. I thought, this could be a really cool thing. Maybe they've solved the problem. But being personally quite validated by the fact that it didn't work, you know? And that, you know, this is the problem I face every day and thank God I haven't missed a trick. You know, the real thing is that the barriers to this problem are many and complex. They're human barriers. There's technology barriers. There's technology environment, security, FY licensing vendors to consider. There's a bunch of consideration. But I got me thinking. I got me thinking hard about, do we actually care about any of those considerations? How long will it be before you move on to the next piece of software serving or disseminating your data? When will the next high-speed internet format come out? It's worth considering the process of just publishing an open data where at the website, you know, just because you can. Maybe that's not such a good thing. With this in mind, you know, the real value of data is, of course, the data, not necessarily the technology housing it. This is an important thing. Or indeed the software supporting its distribution, it's the actual ones and zeros. It's the data. The values in those tables, more so, and the value of each data point increases every day as well. As the temporal depth increases, the amount of actual value increases too because you have more information. I mean, Landsat, for instance, is hugely valuable data set because of its longevity. And that happens entirely independently of the software or the technology. That happens because of the data and its age and the consistency of its capture. So we should make sure that we are capturing and publishing the right data because if we're not, then again, we face this idea of the opportunity cost to our investment in that data. So back to the hackathon. Context is really interesting. Context is a really important thing. 
Without context, you get a skewed impression of what our world actually looks like. You might be confident in knowing that your little piece of the world is just right. But unless you have a good idea of what's happening around you, you kind of end up with a map chicken. You end up with this idea that, you know, you've got your piece right and I don't really care what everyone else is doing. So you don't have this idea of context. And this is an, you know, an extension of this is the idea that we should generate an enormous value to our data by publishing it in commonly understood manners. So let's take cats, for instance. The University of Abster did a wonderful study. They found that there's 14 billion images of domestic cats on the internet, of which 2.7% have bred around their heads. Indeed, there's only 220 million domestic cats in the world, which is, which leaves us with a problematic situation, that there's 65 pictures of every single cat on the internet. What's the point here? The, you know, cats, what's the point? The massively popular phenomena of cats on the internet is the combination of cuteness, convenience, and compatibility. Think about it this way. Each cat data point is commonly understood by both the computer and the person. There's only a few popular image formats. And in the most part, they're well documented, well understood. The ability to take a picture of a cat is somewhat ubiquitous. It's easy to do. And these data points are perhaps just slightly different dialects of the same language. So they're easy to share, they're easy to manipulate, and they're easy to reuse. Oh, wait. Isn't that what we want from open data? Consider the multiplication factor that we had with the temporal nature of data, and then consider what the network effect is if we commonly publish the comparable datasets, if we understand each other with different dialects of the same language. This is the data utopia I think we need to strive towards. It's easy? No. No, this is really hard. This is actually really, really hard. But what's the first easy thing to do? This is, what's the first easy geo thing you can do to make your data readily available to everybody else on Earth? Easy thing is to do that. Publish your data in two well understood projection formats. I'm sure that your local conic conformal measures area way better and it's got better distance, but the rest of the world, the rest of the web mapping world who want to join things together, they care about Web Mercator. We can beat up a Web Mercator all we want. That's fine. Problem is it's there. It's a reality. So it's typically either a button push or a single line of code to also publish your data in a commonly understood projection system, to get it in a commonly understood manner that anyone, if they want to, can just say, hey, whoa, yeah, I can consume that into my web map. It's the same kind of thing as this thing. I can get roads from Alberta and I can get roads from BC and I can have Western Canadian roads. This is awesome, you know. The hard bit here is not the technology. The hard bit here is actually the advocacy and the willingness to committing to what I'd like to call a commonwealth of data. A commonwealth of data formats, a commonwealth of sort of data lumps that we can all access. I think that is the key takeaway here. My point, the key thing is that, for instance, every individual municipality's data becomes more valuable the more it can be commonly understood in the context of other municipalities. 
I keep on beating up the municipalities. That's not really fair. I just mean entity that publishes data. Let's say that for instance. Companies could also be doing this. Every province, territory, state, entity, company, data becomes more useful when it can be placed within a much bigger context. In short, I propose that we congratulate ourselves on making a huge leap forward in publishing data, but we start thinking a little bit more about what to publish. We start talking to each other and ideally kind of try and publish the same thing. And there's a picture of the cat I find on the internet. I thought you might like it. That's me. Thank you very much. So, you said the publishing 4326. Isn't that just the technology of the day? Where do we draw the line? Where do we draw the line? In the world of different shape? No, I mean... Yeah, I agree. I agree that 4326 is, well, and 357 to some extent, they're sort of indicative of the technologies that we're using right now. But I think also a lot long in general terms, and the WGS84 in general terms, is probably not going to disappear until we have a different shaped earth because it's the most convenient way. And frankly, we're measuring latitude and longitude as our kind of de facto global measurement system for the globe, I guess. So, you know, if there's a better one, awesome. Let's present it. Let's get out there. But I'm not sure there is right now. And we could probably blame, you know, less blame open layers for the sake of it, but we could also blame Google and Bing and all these other guys for joining together and doing the same thing. Or we could say that's an awesome approach and now we can all publish our data in the same thing and no matter what manner we want to display that data, it's readily available. And how about compared to, for example, the OTC standards, are they ubiquitous enough to be considered the language we should choose to support until the earth changes? Sure. But I'd also argue that the OTC can provide their standard, but we could spend an awful lot of time jumping onto that standard and doing that. Or we could do this thing that's going to work on the platforms we have right now. So, I mean, there's a pragmatic piece here which is an easy thing to do to get your data to everybody who's using a web mapping application which only understands one of a few projection systems is to press that button is to write that line of code that says transform as and cache me. It just seems like a very straightforward approach to getting over the hump which isn't necessarily an OTC hump. It's a global kind of use of data hump in that we want more people to share more data. I think that's something that we'd all like to see. And a quick way of doing that is publishing it in a commonly understood projection system. So, you mentioned at the beginning that the municipalities didn't just have different data formats, but that the numbers meant different things. And so, I see the, you know, the publishing standards as a good way for normalizing the publishing of that kind of data. But how do you get municipalities to start tracking the same numbers and talking to each other in the same language? Yeah, we can talk into each other's magic. And also, yeah, there we are. And also the hardest piece of the puzzle. I mean, when it comes down to it's a human decision of what data you track. And I think the open data trap is, hey, it's easy, we got this thing, turn it on. 
And the harder bit is where you think, okay, we should actually have some kind of understanding of what is commonly useful to the community. And maybe that involves the manipulation on the municipalities end. My experience is that typically if you make it harder to release open data, it typically doesn't happen to quite the same velocity. So that's a risk for sure. But I think the network effect of people talking the same language and being able to be sort of some level comparative with each other is enormous. I think there's huge value there. And I think each individual municipality or state province country can actually leverage that themselves. I think there's a value to them as well. So in terms of the sort of interoperability you're talking about with data from multiple municipalities, multiple sources. What besides the SRS, what parameters are you running into? Because I probably don't have the greatest grasp of this problem. But what are the parameters? The key thing is that people publish different stuff about the same thing. So they might call it the same thing, but it's an entirely different entity. So the columns are different. They hold different information. So in essence, it's comparing apples and oranges. So it sounds like what you're really talking about is developing a standard set of ontologies. Yeah. I don't use those words because they're really long. But other people do who are trying to develop those standards. So it's a semantic understanding of what data would be useful in the community and blubbing ourselves into that. Thank you very much, guys.
|
Up in the frozen wastes of the Northern British Columbia, we organized a hackathon. We based it on the ideas of open data and civic applications.Our hardy hackathoners pulled together a number of excellent ideas but met with a constant and obtrusive barrier: that open data maybe open but with out some level of standardization its not actually very useful.Now, no one said that data had to be 'useful', and perhaps if we want the technology utopia of real open data interoperability we will need to "build it" ourselves, but it is worth noting that talking the same language as our neighbours is generally awesome. Indeed, perhaps rather than swearing fealty to our technology overlords and just pressing the "publish document to open data platform" button, we could think about the commonwealth of data. The value of any data increases wildly with density and open data should be more valuable!The cats? well you'll have to tune in for that bit.
|
10.5446/31723 (DOI)
|
Hello, I'm ready to start and talk a little bit about OGR and an extension for OGR and doing ETL with OGR About myself my name is Benjamin Calvader. I work for SOS Paul, a space-based company doing mainly development for QGIS and providing services support for QGIS and We use as almost everyone uses OGR GDL and That's why I'm talking about OGR as well here ETL is Extract Transform Loads everyone doing GAS is working with geo data there are quite a few Applications available for doing ETL there are certain applications on a higher level like hail, geocatel, talent, geo extensions so you have a desktop user interface where you can make models for your data transformations and so if you want to work on that level then I Would suggest to have a look at geocatel. I like that one very much But in my case I need like a library which is embeddable in QGIS for instance, so this is not an option for me. I want to build Lower level libraries GDL OGR is one of them and There are other possibilities you can do a lot of things directly with Poch AS or if you're handling XML files You can do XSLT There are many possibilities But For us it was we wanted to build on OGR so I explained first how you do transform data with OGR there is OGR to OGR the program that executable which many people know there is even a desktop GUI tool for Helping with that and there are plugins and things helping with using OGR to OGR and To understand how it works you have input data and then you have options which apply to the input like applying an SRS or a layer name or a layer type geometry type and then it is read or Then it is the file is read and it is in the OGR data model and Then you have output options the most important one is the format you want to have and you can transform it or apply filters and so on But it is important to understand what this OGR data model supports it is quite a simple model it is Attribute fields so number of fields with types it uses a feature identifier which you can Which is configurable and it had for a long time one geometry field and now since 111 it has multiple geometry fields. This is an important improvement and it was financed by Swiss Agency because of that project and because of support for Swiss format interleaves which need support for multiple geometries per layer Next step next level when using OGR to OGR is using the OGR virtual format this is not so well known and This is good documentation on the OGR site and it was made I think Frank can correct me to create spatial layers from from Text files means like CSV or something or flat tables without Geometry format but with let long coordinates or something similar and then you can Make a configurate file like the XML down there where you can there you can tell which fields or which attributes are forming the geometry so the X and Y attribute and using this you can look at At the non spatial layer with spatial tools and If you look at this graphical representation you see that this VRT is the input for OGR to OGR but it Points to the to the input data and it replaces these input options for reading Data and you can apply the other OGR to OGR options for creating our data or another format This virtual format has grown over the years. It has many possibilities many hidden possibilities or not so well-known possibilities you can Obviously define the geometry fields you can change column names You can change column types or map them You can omit columns and you can apply spatial filters. 
You can reproject layers and and you can instead of having Fixed input layer you can have as well OGR as well expression which could be native SQL for postures or it can be SQL applied to shapefiles even Which are then executed by OGR and this gives you the poly possibility to have like calculated fields fields with expressions or doing joints and other things so this is quite useful and We build on on that a new stat for For our purpose and now what I want to show you is this Python extension we using the Python API of OGR and and we collect some functionality of OGR info and OGR to OGR and but also Python implementations for which are included in the OGR distribution We provide a single binary for command line usage which makes More uniform the usage and we have a JSON configuration file Which has Similar content as the VRT just shown but with additional parameters for like the OGR OGR to OGR Command line parameters and And in this OGR tools also included is a QGIS plugin and That wanted here is that because OGR is already included in QGIS. There is no additional dependency, so if we use the Python API of OGR, we don't have to run an external Executable, but we have we can use all the functionality of OGR or via Python API. This graphic should explain how it works, so The new part is on the right side is this OGR configuration, which is this JSON file I mentioned and from this temporarily created VRT is used for as an input for OGR which points to the input data and input options and output options are included in this OGR configuration, so the idea is that you Can put your OGR to OGR parameters in one configuration file. That's one idea So this is a Python library and it is installed like Python libraries Usually are installed with its PIP install OGR tools and then you can use the command line I Would like to show it on the bash directly in the console But I have to switch first Because I should see it as well So So I've installed it obviously and I can call this OGR executable which is also a Python file and Here you can see the commands which are available. I can look for a version of OGR the OGR library installed Okay, you don't see the button Yeah, that's good enough, okay What your version or your formats think you can do with regular OGR commands Oh, where is my script? Sorry Okay, just a second Okay Next thing is I want to have something like OGR info I have some test file included So that's similar outward or the same output as you get when you use OGR info as Executable But this one is implemented in in Python which wasn't done by me which is also included into OGR distribution There is a Python implementation of OGR info and I'm using this one Another feature is Applying as OGR SQL expressions on data. This is all the possible with OGR info, but my goal was to have one Point or one entry point for all these kind of commands so I Do an SQL expression on the same on this shapefine Okay, now the SQL query so I Had the info so I know I know which attribute I have So what I do now I do Select operation you can see it. It's bottom I do that first oh, huh railway is that correct I Need type or here I go to the info back here I see the name of the layer and see all the the column names So I made The layer name is correct So did I make a mistake here type was my name? That's I do OGR SQL which is the same as a This as you can do with OGR info. 
That's what I do and what it should do In my example I've even something Like that Hmm I can't see the mistake right now, but I Hope you believe me that you would get The same as That's when you do OGR info It is select I think a query Country memory is it Yes on the OGR site you have documentation what kind of SQL you can do and the SQL is Interpreted differently when you do it on sources like shapefiles, then you have limited SQL support and we do it on on post keys it is native approaches SQL or on SQLite or It's also native SQL Next thing which is included is helper for generating this VRT files Which is useful is for one to Oh that was So that was also that the SQL problem What's that We know that that's that's the same output as OGR info gives you if you do a nice call query on on a OGR input type and So you can add where close and so on so the problem was this from File name and if we do this OGR VRT we get A valid VRT Which does include the same information as OGR info has which gives you a starting point for changing data types or changing Field names and so on so this is Useful is if you want to work with VRTs And next step is now to have a configuration file for doing this OGR tools transformation. So what I do is another command Generate config which is similar to this VRT configuration generator. I Tell what What the preferred destination format is so this is different to this VRT Because I want also to have the output parameters of OGR to OGR included in the configuration file I don't have to convert to this format, but it's It's optimized that default parameters are optimized for the this default format So the same again I use this shapefile. Okay, so this is how the This configuration file looks like and Some parameters look familiar. So this are OGR to OGR parameters which are applied When I execute the transformation and which can be changed Your source format and destination format we have The same thing as we had in in the VRT the field types and so on and We have geometry type layer name and Geometry field specification Juniper empty but I ignore that for now I write that into a file Or now let's call it configuration And now I can use that for a transformation so next step is OGR transform Ah look strange, yeah But here the executable helps I can always press enter and look what's missing so I Add format I Want I don't have to add because I go to geochasing first At the input file And the output file and it's the same order as OGR to OGR destination and source so the output will be real way to chasing Oops a little mistake. What is it? I Forgot the configuration fine This one is not optional so what I got now is railway.json Which is the geochasing file Made from the shape file What does this mean Good, okay And to play a little bit I can edit the configuration And let's say I saw that this was MIT was Floating point number which makes not so much sense So I do the same again with this new configuration oops Geochasing driver that's originally OGR to OGR does not overwrite existing files So either I had to add the options for overwriting or The easier for now is to remove the file first And what was it railway chasing? Tell me that I should have used something else. 
Yeah, but I was quite short Integer was correct I Had it capitalized before I Didn't I learn okay Oops I was too lazy And now we have to OSM ID as an integer See there But that's more or less VRT functionality we're using here And What I can do as well is doing the reverse transformation Automatically that's another option of OGR transform Oops So you have this reverse flag Which interprets the the configuration file the reverse way and And help just doing the reverse config transformation so I go back to transform and So this was the input Now I do Another Output file with another name and the input is now the JSON the geochasing I have created and I have to know I don't have even have to give the format, but I could also use another format I only have to Specify that these are worse Whereas config, okay, and now I should have New And 73 features so it looks correct Exams like this are on on the GitHub page of the project I switch back to the last slide So that's the GitHub page all the sources there and there is quite some documentation there in the in the read me but This is an ongoing project and if somebody's interested and Things that's useful. I'm very open I'm open for any help. So what could be done is Doing something similar like Fiona on the good side Having including a more path on a Ogr API Not all Ogr the Ogr functionality is included yet Then it would be nice to make table showings easier star. You can do it with as well, but it's not so easy the Python API itself is Is not documented yet. Yeah, there is no documentation for the payday Python API, but only for the configuration file and executables and as I said, it has a Qtis plugin, but this Qtis plugin is specialized on on the Swiss Intel is format and that was the reason for this project and It would be very interesting to have a more generic plugin for doing Ogr transformations started from Qtis without the need of an executable which is always a little bit difficult It is platform dependent and pass passes must be correct and and the libraries must be found and This would make this easier Could also be interesting to integrate this in the processing functionality of Qtis which would Then you could use the model builder of Qtis for using for doing things like that Okay, time is over. Thank you Maybe one question. Yeah, yeah So I remember reading Gdall roadmap that there's Eventually some consolidation of the Gdall and Ogr sides of that project and some talk about changing the way the binaries function, so is that something that's You know anything about that might be happening sooner that will affect This project here It has already happened for a big part it is implemented But it doesn't influence this very much. It's I mean it's still an Ogr API and a good API I'm not sure but I think that the Python API will will stay the same. I don't expect big changes there so Doesn't influence the project This is this Python part Yeah, maybe this this command line interface has to be even more generic that it even covers roster formats If you Be another step, yeah Thank you
|
The ogrtools Python library lets you run complex ogr2ogr operations defined in a configuration file supporting transformations of all OGR vector formats. It uses the OGR Virtual Format (VRT) internally for transformations like renaming tables or columns, calculating values and converting data types. Since version 1.11 of GDAL/OGR, multiple geometry columns per table are fully supported for major data formats.As a pure Python library using OGR Python bindings, it has no additional dependencies and is therefore easy to integrate in other applications like QGIS.https://github.com/sourcepole/ogrtools
|
10.5446/31724 (DOI)
|
Yeah. I'm pretty sorry for the technical problems I had to solve, actually. My laptop has a very strange kind of HDMI connector. Of course, I led it at home in Prague and there was supposed to be a guy here in this room helping me out with the problem, but he apparently didn't arrive. Finally, this is the talk. This is the presentation about PYWPS project, PYWPS Thoughts Report. You probably heard several details about PYWPS in the previous talks. So just from my perspective of the project is doing and what we're up to. Something about myself. My name is Jachim Cepitsky. I'm a member, among others, of the board of directors of Open Source Geospatial Foundation, so called OSGEO, and kind of president of the local chapter of OSGEO, we used to call ourselves Open Geo Infrastructure, our association of Open Geo Infrastructure. I've been involved for longer than 10 years in the development of the open source software for geospatial in general, both on desktop server client side. Currently, I'm working mainly in JavaScript environment. And today, I'm talking here on behalf of PYWPS, how we used to call it, PYWPS Development Team. PYWPS started in 2006, early enough to be first time presented at the Phos4G conference in Lausanne in Europe, in Switzerland, and it is the obviously implementation of OGC WPS standard on the server side and is written purely in Python programming language. Current version is distributed under the GNU GPL license, version two, and the new version of PYWPS, we would call it four, PYWPS four, is being done under MIT license. PYWPS is one of the OSGEO so-called LAPPS project, which is intended to be the LAPPS kind of umbrella for smaller projects, which do not have all the infrastructure or do not need all the infrastructure like the big OSGEO projects, like Project Stream Committee for example and stuff like that. But we still want to be part of the big open source geospatial families, so we are in the LAPPS. PYWPS is also large or part of the larger so-called Geopython community. You can find us on github, under github.com slash Geopython. And as you can apparently see, that PYWPS developers are the most handsome guys all around there. Keywords. So if you say PYWPS, what should you imagine? This slide is actually based on presentation of Bastiaan Schaeffer, the original author of 52 North VPS. WPS, sorry, he was explaining the features of 52 North WPS and how many features in it. He was comparing it to pretty good car. And if I would use the terminology of him, PYWPS is rather bike than a car. It's a little small bike actually, modular, of course fast. Based on the previous presentation, the PYWPS isn't fast in terms of running on the server. It should be pretty fast to get it run and set up on the server. And the feedback from the users was pretty positive so far. It should be easy to implement, to get around, as I said, to set up to write around process. And I like the word slick, so it should leave as low spot in your system as possible. And there is a bunch of accessories which you can use in order to plug into PYWPS. As mentioned, it's written in Python programming language. And among, hopefully, under other implementations of the standard, as I already said, PYWPS is known for simplicity regarding installation and setup. The installation is usually a matter of two minutes. And then after then, you can write your own scripts, your own processes. 
Yeah, and as I said, I didn't say it, but the scripts are then implemented, interpreted as processes inside of PYWPS environment. Since beginning, PYWPS content support, direct native support for grass, just modules, people are asking, does PYWPS run without grass? Yes, PYWPS doesn't need grass at the end. Another support was there from the beginning for GDL-OGR or GOODL or GR or R itself, so it's the physical language, and many others. Basically wherever there is Python binding from some library, you can use it because you are in Python environment. Now, you probably heard rumors about PYWPS 4. This is supposed to be a new start of PYWPS Live. It's supposed to be restart. We started to write it really from scratch. As you saw, we changed the license from GNU, GPL to MIT. We could afford that because there is like no single line of code which has been copied from the Yield project to the new one. Why? In 2006, when we started to work on PYWPS, the world of Joe Python, so-called Joe Python, was different. There was no, for example, grass, Python API as it is today. Python was in its version 2.2. Now, we have Python 3. There were or with you, we have to deal with XML files, large XML files. We have to do basically everything manually. There were no libraries which would help you to deal with OGC services on the client side, for example. The most used format around was S3 shape file. And people started to talk and use GMA, just started to talk about GML and to use it. Today, we have Python 3, as I said. There is native support for Python in grass. There is a grass Python API. A bunch of other projects do have their Python API as well. There are new projects like XML, OWS, or workzoic, for example, for the services which are at hand and which are pretty helpful to us in order to get things done pretty fast way. There are new formats, popular formats like GeoGIS and TopoGIS. Has anybody of you guys heard about KML recently? Not that used anymore, but still, it's a new popular form line in that time. And all these things has to be considered. So we started, as I said, to start from scratch. We started to create a roadmap for PYWPS4 and actually we already defined a set of features for the PYWPS4 one. And you can actually find it on GitHub. This is the fastest way. You just click on milestones or actually, yeah, roadmap, milestones that you are in. Very sure. What do we have already? There is some code already in the repository. Validating. So far, PYWPS3 doesn't validate anything. What you send in is what you, what the process has to deal with. So if the input file is somehow corrupted for some reason, PYWPS doesn't take care, doesn't pay any attention to it. Everything happens. Every error message, so to say, happens on the process level. So actually, essentially, when usually for OGR library isn't able to read the vector file, then it fails, but not earlier. Server implementation is based on work. So it should be popular library for server-side application creation. And yeah, it seems to be pretty good choice. We have new IO or input output handler and we define some universal object. Yeah, there is a new universal object IO handler which performs transparent transformation between data stream, file object, and so-called in-memory object. So we have basically one, one, why? As I say, I want one data type object and we can switch between various, yeah, appearance in the system because some of the tools you need to address, they are expecting file name. 
Some other tools are expecting actually opened already file object and so on. So there is a way how to do this inside of PYWPS in a transparent way. File storage. Currently, this is something you need if you are dealing with data and there is always some big, or not only big data, but there is always some file at the end of the process or usually there is a file at the end of the process. You have to deal with it. You have to store it somewhere. Till now, we are assuming you store it on your local drive, on your local hard drive. But what if there would be a possibility that the process stores the resulting data to post-GIS database? What if you want to send it to FTP server somewhere else? What if you have your Dropbox account and you would like to send the files there and so on? What should PYWPS4.1 contain once we are on it? We should support output through GeoServer REST API, MAPServer, MAPFile or QGIS MAPServer. I think PYWPS is one of the first implementations where we started to talk to, so to say to call, to talk to other projects and using their web services, actually MAPServer, a MAPServer session to distribute the final result of the process. If the result of an interpolation process was a REST file, the result of the process was linked to WCS service. So the client could then deal with it more easily. There will be administrative REST API interface, so we hope we will have something similar to GeoServer where the administrator of the whole server can, yeah, configure it. Then the administrator wouldn't have to go to the command line of the server and the simple stuff should be simply add hand somewhere on the web. Yeah, and as I said, currently we have only file storage implemented. There should be database storage or something we call external service storage like FTP Dropbox and other services. But what does break us? How come that we didn't do it yet? A team is currently out of time and there are, so to say, no external resources currently in order to be able to move fast forward. We have to confess that for open source project about this size, lack of resources is pretty critical. We are able to maintain current version of PyWPS fixing the bugs or accept pull requests, but heavy works on PyWPS are currently impossible. Even though, as I said, in Git repository, there is some code, something is running, there are some tests, and a lot, but really, to get things done, we need at least one guy working on it for a full time. The good news, man, this was the bad news, good news. This year, he had luck and we got four pretty interesting proposals for Google Summer of Code. I think compared to Gras and other projects, it was pretty much, Vasek, maybe you'll correct me, but there were like five requests for Gras project, I think, maybe seven, okay, four, therefore PyWPS, so this I call successful project. As a result, we obtained one slot. One student is working currently on process chaining on in the current version of PyWPS and we are looking for her to see her work in PyWPS four. Why only one slot out of four? Because OSG opens with Geospatial Foundation is actually de-mentoring organization for Google Summer of Code for the Geospatial, opens with Geospatial Domain. And of course, since PyWPS isn't OSG project, then the slots were used for OSG project at first place, so we are lucky for the one we got earlier this year. Thanks to cost framework, PyWPS had the chance to meet at the Code Sprint at Andre Tudor Research Center in Luxembourg in Europe. 
And it was actually the joint event for PyWPS and 52 North WPS. We hope, of course, that the next year, we, that the coming together will be much bigger and we would like to have zoo people and geosurgery people there with us because we had a lot of fun and also we could talk about the new version of the WPS standard as well. I would like to thank to existing at past sponsors of PyWPS development and encourage, of course, new coming sponsors to help with the development. As I said, there is a roadmap, so you can check what we would like to implement within couple of, next couple of months. The companies and the project supported the project, supporting the project, they, sorry, the company supported the project either with manpower or with hardware capacities, of course. And thank you. Are there any questions? Here's the mic. This is the first presentation without question. I mean, great. Oh, yeah, thank you. I have a shy question. What is WPS and why do I want to use it? Okay. Yeah, obviously this isn't intended to PyWPS topic, but yeah, in general, WPS stands for OGC web processing service standard. So what does web processing service mean? Do you have any, do you have any name or did you try WMS standard, for example? So you have some experience in that field and WMS serves images, maps, basically, or offers maps or images of maps. Web processing service standard offers so-called processes. So some just usually geospatial operations which are deployed on the server. And then the communication between the client and the server can, or the client with the server can talk about which processes are in offer. What does the client need in order to be able to run the process? And then the final type of request, so-called execute request, is about the client provides the data to the server and asks politely the server to perform, for example, interpolation, buffer, or climate change mode or whatever. And this, of course, depending on the input data and so on, it can take either a few seconds or several days. Depends again. And at the end, as a result, there is again an XML response where the output data are somehow pointed to. Is it okay? Yeah, okay. Thank you all.
|
PyWPS is one of the first implementations of OGC Web Processing Service (OGC WPS 1.0.0) on the server-side, using Python programming language. Since it's beginning in 2006 it was offering support for running scripts of GRASS GIS and other popular libraries, such as R, GDAL, Proj4 and other. Users of PyWPS can write their server-side geo-scripts and interface them on the internet using standard WPS interface.During last two years, PyWPS development team was discussing new features, users would like to see in this popular OGC WPS Server implementation. Users were missing for example proper support for multiple in- and outputs, advanced logging, more natural serializing, possibility to store big data to external services. PyWPS was never validating properly input data, as long as underlying libraries were able to read them.Also new versions of nearly everything are at hand - Python 3, GRASS GIS 7 with proper Python support, Fiona, Shapely, no need to write custom code, when OWSLib is around. New formats are now used for sharing of raster and vector data, for example Geo- and TopoJSON. They can be even validated, using json-schema. Python became The geo-scripting language since 2006 (now being slowly replaced by JavaScript).Current work on PyWPS 4 is split into several fields: New WSGI interface was written, using Werkzeug. PyWPS has now new core for in- and output data structures (LiteralData and ComplexData). New IOHandler base object can seamlessly switch between file-, stream-like- and in-memory objects.PyWPS - 4 contains validators of input complex data, which uses four-level of validation (None, mime-type based, "can read GDAL", schema validation) for XML-based format (like GML) but also for JSON-based formats (like GeoJSON). Literal data are validated on similar way.We are going to support MapServer, Geoserver and QGIS MapServer in the future for output complex data management and serving. Data are going to be stored in storages (new abstract class defined), which currently is file system based by now, but can be extend to remote storage (such as FTP or e.g. Dropbox), or to database servers. Possibilities of WPS-T are discussed as well.PyWPS - 4 will remain the old PyWPS, how our users do like it: small, fast to install and configure, fast to run. But with new features at hand, we will provide you with modern, safe, scalable tool, which you can use to interface the work of yours on the internet.
|
10.5446/31725 (DOI)
|
bundlefund 빨리 ements Hamä wo d instruction possible the So we think too self that we enjoy the time in the library so we don't follow us but we don't learn how to use our tool. We just want to have a black box so we can already know the application that is researcher or else. Phous4G 2009 portrayed in Sydney.. פemeTubeальныеodie伺 în 2010 Phous4G Barcelona сот<|fi|><|transcribe|> lapping 2010 investors, Finnish wn Brooklyn. an ic 2 to velop origine yorsun entreprene� GEOR flix 2012, so no release of Zoo project either. Then Zoo project 1.3 was released in phosphor G 2013, and Zoo project will be released in phosphor G 2014. So you can tell me but we are in phosphor G 2014, but there are many phosphor G all over the world as you will see in the next slide. And you can see me and my brother in law, Venkatesh Raghavan, Nikola and I in front of the big pond in Sydney. So I would like to present to you what we call the Zoo tribe, because we have a project steering committee, we are currently an OSGO project in incubation. We have Zoo supporters that I will show you. We are Zoo keepers and zoo animals, which are the phosphor G and phosphry and open source library. Because indeed many of you can think that WPS is made only to use GIS stuff, but we don't care, we are using WPS for doing everything. So let's speak about the Zoo tribal council. If you take a look twice at this list, you can strangely recognize that there is three OSGO board members nowadays. So there is Massimiliano Canata, myself and Jeff McKinna as the OSGO president. So from the inception there is also the phosphor G Queen, that I think everybody know, which is Maria Brovelli. Hirofumi Ayashi from Japan, so you can see also with the country we are working with, we are worldwide represented. Daniel Kastel, which is a guy which developed PG Routing, the PG Routing library. Jeff McKinna I already said. Marcus Netler is in the PSC from the inception. Marcus Netler is just a guy which developed GRASS GIS. Then we welcomed during the phosphor G in Bremen, Angelo Stostos from Greece, which is no member of our PSC, we are proud of this. And obviously the big mentor, the guru, let's say, from the project, which is Venkatesh Raghavan and which make all this story possible. So obviously now you know who is leading the project, but you have to know who paid to make this project happening. So you have, we have five company, five sponsors, but with the time passing we realize that it is far better to have knowledge partners and only money. Because sponsor will provide you only money when knowledge partner can provide you three year of human resources. They can put one student, a PhD student working on a project and using your software and asking you question and make you announcing your software. So we announce our software a lot. So this is our new design, we should have a new website soon. In fact the website is already on his way. So in Zoo project there is three different parts. There is a WPS server, so which is represented here like a cheetah, which is based on C, language, and which is able to handle your request and to run your service and then return the result to the client. Then you have a growing suite of WPS services, what we call the Zoo services. And then you have the WPS API that we will see later on, but which is a JavaScript API which let you implement services in JavaScript and we will see how important this can be. So Zoo kernel, Zoo kernel is a WPS reference implementation, as we saw today that on site, there is some issue. 
It was released under MIT X-Sense license, X11 license, sorry for my French. Indeed we would like to use it in some proprietary software, so we have to use MIT X-Sense license, but it's fair enough because many other software are using it, such like MAP server for instance. Hopefully we are able to run on every platform, which is existing nowadays. So Zoo kernel, I don't want to go deeper in details because it's a bit complicated, but Zoo kernel is able to pass your request, let's say, load a dynamic module, which from C can be a charred library. Load in memory is this library, it will bin the function inside this library, it will give the configuration input, output, you just have to fulfill the output and it's finished. So as you can see, writing a service code is as difficult as the three lines, which is here. And with these three lines, you can send, obviously you will have to remove the yellow from Python, but with this simple service, you can publish whatever as WMS, WFS, WCS, depending on your data input. So for instance, if you run this service by sending a zip file containing shape inside, then you will have an output WMS or WFS request that you can use, and you can reuse, we will see later on. So I think we can say that it's really simple stupid. So obviously we, I told you that we use the C programming language, so we use the C programming language because we talk to ourselves that it is wrong to use only one programming language, because if I have one Python code or one Fortran code which is working for 20 years, what should I have to re-implement everything from scratch? I don't want them to re-implement everything from scratch, that's why we implemented this in C, because every other language is based on C. So we are supporting, you can develop your language using all these eight programming languages. So C and C++, Python, you have the choice between Python 2.7 and 3 if your module are available. Fortran code can be run as a web service because we have a mathematician which code in Fortran and he want his Fortran code to run as a web service, so what can we do? Re-implement is Fortran code, no, just embed the Fortran interpreter into our C code. Then you have PHP support, Java support, Per support, JavaScript support and nowadays Ruby support from the new version. So obviously you can tell me it's great that your software can be used by many kind of programmer only various kind of programming languages supporting various kind of programming languages is great but speaking various natural languages is even better. This way your interface can be translated automatically. So we are now supporting English, French and Japanese language and this is a room in fact of the Barcelona on the left, it is a room of the Barcelona workshop. Thanks to the Gernel for every service which will return a vector data or raster data. You don't have to write one line of code for being able to publish automatically your result as WMS, WCS or WFS depending on the data type obviously. So here is an example of the use of WMS publication. We will see later on that there are few others example which are better than the previous one. We also have support from the exemption about asynchronous request. You know probably that in WPS 2.0.0 there will be the get status request but we think to ourselves that get status is a really great capability for WPS and finally we find a way to integrate it in WPS 1.0.0. 
It's easy, you just have to send the status location to a get status request and in fact rather than using the request get status which is not existing in WPS 1.0.0 we simply created a service named get status and this is the status location so you can poll and have an ongoing status information and since the new version you can also provide some kind of messages. So this way you will have hopefully in your web application you will have loading bar which means something. It's brand new. So then you have those services. So those services are the result of our work so anybody can develop his own services. Many people developed their own services but they never contributed back. Anyway you just have to know that as I told you we have the module which can be loaded into the memory and run and so on but obviously we also need metadata. So we totally separated the metadata information from the code. Totally separated so we have two files. We have your charred library for C for instance and then you have the zoo configuration file which define what kind of output is per default what kind of output is supported, what kind of output you can expect so we have the zoo configuration file and hopefully since the new version we have the YAML support so you can even write your zoo configuration file this way using the YAML syntax. So if I have to speak really about the available services there are so many that I cannot list them here. So we are mainly using GEDAL, OGR. We even have internal, obviously we have internal support for GEDAL and OGR in other case how we can publish our result as WMS, WFS, WCS or maybe I should mention that we are using map servers and at the end to publish your data using WMS, WFS, WCS why we should reinvent something which is already existing and running pretty fast like map server do so we just reuse map server. Same for GEDAL, OGR. So we also have GRAS thanks to soerangibers. All the WPS implementation can take advantage of the GRAS GIS software by using the WPS GRAS bridge which it developed some years ago. We also have support for SEGAL to do some triangulation Voronoi triangulation, Dolone triangulation. We are not really GIS people. We are just using the library which are available all over. We also have some pitch routing service, map server services, R services and something new I think during this conference we have also LibreOffice services because indeed we are using LibreOffice as a server then we have some small services. The API is available on the map mean services so you can reuse it in your own software and we are using LibreOffice to do some reporting thing. Indeed LibreOffice, I think everybody is scared about this LibreOffice server thing but since the creation of OpenOffice, when you have OpenOffice on your computer you can run it as a server then you can use what they called universal network object to communicate with this server to ask him to open a file search some string in the file, replace this string by this other string you can replace one image by another image you can create new graph simply by using the result of an SQL query for instance to print your graph. 
So here is an example of those services, using pgRouting; it is also, I think, the best illustration of the automatic WMS and WFS publication. Indeed, in this case we have just one service which is able to compute the shortest path. To display the shortest path on the map we simply use a WMS request; to get the details shown on the right we simply use WFS; and to compute the profile we do not send the GML coming out of the GetFeature request, we just send a reference, because obviously in WPS you can pass data by reference. So in this example we can say that we reuse the same result three times: once to display it on the map, another time to get the details, and then we just send the GetFeature request; and since the data is stored on the same server, that GetFeature request runs really fast, locally. Then we have what we call the ZOO-API. The ZOO-API is unfortunately based on SpiderMonkey rather than V8, but that is because its development started in 2009, and I am not sure V8 was even available at that time; anyway, even in the name ZOO there is Mozilla for the "Z" and OpenOffice for the "OO". At the beginning we thought it could be great to run WPS locally or remotely, transparently. If you want to run something on your desktop, the easiest way to go is probably a XULRunner application; the most famous XULRunner application nowadays is just Firefox. You can create XPCOM components that you can then query from your JavaScript on the desktop, locally, and those XPCOM modules can be implemented in C, in Python or in any other language. That is why, at the beginning, we thought we could bring ZOO in as an XPCOM component, so that you could run the same services transparently, locally or remotely. At the end of 2010 we had our first XPCOM component running ZOO, and we were able to use it, but then we just stopped. Anyway, thanks to this API you are able to call other services. You can tell me that it doesn't make sense to add chaining this way, using another programming language, because WPS already supports chaining; I agree with you, I fully agree with you, but obviously by using a programming language rather than plain XML you can add logic inside your chaining, and I think adding logic inside the chaining makes a lot of sense. So here are some examples of ZOO-API usage, because in fact you are all invited, obviously, tomorrow, to see the MapMint platform, which is on this screenshot. The MapMint platform is 100% based on WPS; I mean that even the user interface which is here, the HTML page, is the raw data output of a WPS request. Here you have a kind of classification of a raster file; then we chain with another service to tile the georeferenced, classified image, and then we reuse it in the map file. Here is another example, the georeferencer module inside MapMint, where we just chain gdal_translate and gdalwarp to georeference an image: what we would normally do from the command line, we do on the web server.
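The ZOO-API itself is JavaScript, but the point about adding logic to a chain is easy to illustrate in plain Python with ordinary WPS 1.0.0 KVP Execute requests. Everything below (service names, parameters, the ZOO-Kernel URL) is hypothetical; it only shows the kind of decision a static XML chain cannot express.

```python
import urllib.parse
import urllib.request

ZOO_URL = "http://localhost/cgi-bin/zoo_loader.cgi"  # placeholder endpoint


def execute_kvp(identifier, data_inputs, raw_output="Result"):
    """Run a WPS 1.0.0 Execute via KVP GET and return the raw output."""
    query = urllib.parse.urlencode({
        "service": "WPS", "version": "1.0.0", "request": "Execute",
        "identifier": identifier,
        "DataInputs": ";".join("%s=%s" % kv for kv in data_inputs.items()),
        "RawDataOutput": raw_output,
    })
    return urllib.request.urlopen(ZOO_URL + "?" + query).read()


# Step 1: compute a route with a hypothetical pgRouting-backed service.
route_gml = execute_kvp("ShortestPath", {"source": "12", "target": "458"})

# Step 2: only if a path was actually found, ask a (hypothetical) profile
# service for the elevation profile. This "if" is exactly the logic you
# gain by chaining from a programming language instead of static XML.
if b"<gml:" in route_gml:
    profile = execute_kvp("Profile", {"route_id": "42"})  # hypothetical reference
```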
So, what is new in the 1.4 version? The ZOO-Kernel now runs as FastCGI, so we can run it behind an nginx server. A few months ago we added parallel download, which makes your services run faster: all the downloads start at the same time, and the first to arrive is the first to be served. You have the YAML support. For literal data we added AllowedValues and range definitions, so you can define in more detail which values are supported, and we also added a maximumMegabytes limit for complex data. And I added internal OGR support to run on in-memory files, using the vsimem driver from the GDAL library. As for the ZOO services, for now we just added Voronoi and gdal_contour, and we also updated the base vector operations so that they can be used in the same way as the Python vector operations. We had the luck to be in contact with the LEDEM project, an FP7 project which used ZOO-Project, so really soon you will have a longer list of ZOO services. Thanks to the PublicaMundi project we also developed the zoo-client, a client-side WPS API which lets you call your WPS services; it is based on Node.js and uses Mustache templates through the Hogan.js template engine. PublicaMundi is an open data project, an FP7 project funded by the European Commission, and it is based on pycsw, rasdaman, the ZOO-Project server and CKAN. We also have, as I told you already, the MapMint product, which is 100% based on WPS. When I say 100% based on WPS I am a liar, because obviously you know that when I am displaying the map I am using WMS, WMTS, WFS or WCS, but I mean all the setup, all the interface, almost everything is WPS. So I hope you are aware, as I told you earlier, that there are many FOSS4G events all over the world, and you are all invited, from 2 to 5 December this year, to come to Bangkok and see probably the same presentation again at FOSS4G Asia; we will have a great time there. So thank you for listening: merci pour votre attention, domo arigato gozaimasu (thank you in French and in Japanese). Is there any question? I know the project is really simple, so maybe no questions. I am kidding. There is one question at the back, I think. No? No, it works. Are there libraries as well for calling the WPS in the browser? I know that OpenLayers has a WPS client and 52North is working on one, but in my experience they have all been very dependent on which WPS implementation you have been hitting; have you been working on that at all? In fact, currently in MapMint we are obviously using some kind of WPS API, which is available on the MapMint GitHub, but as I told you just before, we also developed for the PublicaMundi project a specific client interface, which is also available on our SVN server. So yes, we are working on it, and we are expecting to have some kind of model builder which will let you create new services, using the JavaScript language transparently but through a GUI only, dragging and dropping to say this output will go to this input, and so on and so forth.
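Going back to the /vsimem/ in-memory OGR support mentioned a moment ago, here is a small, generic GDAL/Python illustration of the idea (plain GDAL, not ZOO-Kernel internals): data arriving in a request body can be opened without ever touching the disk.

```python
from osgeo import gdal, ogr

# Pretend this GeoJSON arrived in the body of a WPS Execute request.
geojson = b'{"type": "FeatureCollection", "features": []}'

# Register the bytes under a virtual in-memory path...
gdal.FileFromMemBuffer("/vsimem/input.json", geojson)

# ...then open them with OGR exactly as if they were a file on disk.
ds = ogr.Open("/vsimem/input.json")
print(ds.GetLayer(0).GetFeatureCount())

# Release the virtual file when finished.
ds = None
gdal.Unlink("/vsimem/input.json")
```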
No, no, it is a basic client for now, but on top of this basic client we are willing to build a model builder, what we initially called "ZOO Logic" back in 2009; we never finished that work, but hopefully now we can do it. I could probably add a comment to that as well: there is, as you said, a pretty good implementation of the WPS standard in OpenLayers, but at FOSS4G Europe this year in Bremen this question was addressed by the guys from 52North, and we all agreed that there is a strong demand for a generic OWS JavaScript library, which is about to be developed. So, to say it again, you are more than welcome to join the mailing list at OSGeo, I think it is called owsjs, at lists.osgeo.org, and there we are discussing something similar to OWSLib, which is used in Python: a generic JavaScript library for OWS services. And the people who developed the WPS support inside OpenLayers, I mean Bart, are also involved in the OWSJS library development. You are right, I should have answered that. And one comment: there is a GetStatus request in the standard, in the 2.0 version. It's in the 1.0, trust me. Show me. Show me, then. Any other question? No? So thank you very much, thanks for your time.
|
ZOO-Project is an Open Source Implementation of the OGC Web Processing Service (WPS), it was released under a MIT/X-11 style license and is currently in incubation at OSGeo. It provides a WPS compliant developer-friendly framework to easilly create and chain WPS Web services.This talk give a brief overview of the platform and summarize new capabilities and enhancement available in the 1.4.0 release.A brief introduction to WPS and a summary of the Open Source project history with its direct link with FOSS4G will be presented. Then an overview of the ZOO-Project will serve to introduce new functionalities and concepts available in the 1.4.0 release and highlight their interrests for applications developpers and users. Then, examples of concrete services chain use will illustrate the way ZOO-Project can be used to build complete applications in a flexible way by using the service chain concept, creating new service by implementing intelligent chain of service through ZOO-API but also by taking advantage of the publication using OGC standards. Various use of OSGeo softwares, such as GDAL, GEOS, PostGIS, pgRouting, as WPS services through the ZOO-Project will be illustrated by applications presentation.
|
10.5446/31726 (DOI)
|
So, hi everyone. Thanks for coming. Yeah, this first slide's black, so. That way I can talk and you don't get distracted. You have to pay attention to me. My name's Ian Schneider and I'm the tech lead of MapStory and I work at Balanced. First, I'm going to get real comfortable here. I want to thank MapStory Foundation for giving me this t-shirt and also the chance to attend Phosphor G and step out of my basement and into the light and light. I've been working on MapStory for about three years, mostly full time. I do MapStory's built on a number of components, so I also work on those as well. It's been really exciting to see it launch and grow and now, three years later, we're moving to the next plateau of technology. So, I want to start with a quick anecdote which is kind of a non-map story in the sense that it's not a MapStory that you find on MapStory, but it's a story about a map and it makes me think about our reliance on technology, new or old and how we're always building on what others have done. So, this is the Weminuch Wilderness, 1997, this is in Colorado. It's my fifth and final week of my geology field camp. So, I'm out somewhere, I don't know, one of these valleys and I've got my brunt in there and some colored pencils and TopoMap and this guy walks up and says, he's got a GPS unit and it's pretty cool. This was in 1997 after all and I think it cost $2,000, but he said, where am I? So, I thought, why doesn't that thing tell you where you are? I'm in this valley and I got lost. So, I had a paper map but he didn't know where it was on it. So, I looked out my map and I said, you're right here and you're about a mile from your field area, you have to go that way. I felt, you know, it was three in the afternoon, I didn't know what I should do but I figured he could make it back. I never heard of anyone disappearing in the wilderness so I didn't feel terrible. But, thinking about that years later, you know, I thought, I was so proud of myself. You know, I did my thing without a GPS. I had a compass and a TopoMap. But right there, I was relying on, you know, thousands of years of technology, right, the compass, the ability to do surveying. These guys hauled chains through the mountains and made this amazing TopoMap that I could reference and complete my task for the day. So, I think there's always a chance to reflect on how we're building on what other people have done. So if you know nothing of MapStory, one simple way to think about the goal is making the play button work on your map and removing all those superfluous map tools. Or maybe it's the YouTube of Maps with less fanatical comments. So, this is the official large statement which I can read. A more complete description is that this is an effort to build a new dimension to the global data commons that empowers people to organize what they know about the world on any subject spatially or temporally rather than encyclopedically the way Wikipedia already does. So how does this work? Registered users upload their story layers. That's what we also refer to as spatial temporal data. And combine one or more story layers and non-spatial narrative elements to create a MapStory that allows playback. So the focus is really on how the elements of the map change through time, not panning and zooming and checking out interesting areas that aren't part of the story. So we launched officially Open Registration in 2013, but the project has been in development since 2011 in the fall when I started. 
I initially started on some peripheral aspects and slowly became kind of the tech lead guy on all the different pieces. And we've started this next phase just recently. I talked about building on what others have done and MapStory is entirely built on open source with sweat equity from a number of folks here at this conference. Even in this room, I see one guy who, Matt Prio definitely deserves a lot of credit for MapStory and I know together we pulled our hair out together and had fun too. And I know there's a lot of dedicated users out here. Raise your hand. All right, a few dedicated users. But we're hoping to improve that. So standing on the shoulders of giants made this whole thing possible and standing next to great people made it fun. Technologically, these components are the sites built on top of GeoNode, which you might have heard about this week. It's basically a spatial data kind of portal that is built on Django, an excellent web framework written in Python. We have about 154 gigabytes of application and spatial data as of today in our post-GIS database. And GeoServer and GWC are used to cache and serve up about 2 million tiles a day on like max days to map clients built with OpenLayers and OpenLayers 2, I should specify, and GeoEXT. And the whole thing is running on a single Amazon instance. So there's work to be done there. But we've had successes. While we're no Wikipedia in scale, there's an active community that's building the comments and MapStory is often a victim of its own success. Some of these stories are making it out of the out-to-wire audiences and getting thousands of views, which results in millions of map tiles. When you press that play button, it's not just viewing a single map. It actually results in many, many requests. So if you visit and it's down, remember MapStory is a not-for-profit and be patient or donate. So some of these storytellers in the audience just recently appeared in Vox, Pew, Business Insider, Washington Post definitely brought down the site, and the Sunlight Foundation. And some of the storytellers that I don't think are in this room. Jonathan Davis took a bunch of open data from congressional districts and added data on parties that won elections. And this changed the perspective of red, blue, America. This story got picked up by Vox, Washington Post, Pew and Business Insider. Carl Phillips decided to map some municipal border changes over time. He's kind of a fantastic spatial data detective and dug up all types of interesting stuff. This spawned a partnership with the Sunlight Foundation and it was just featured again this morning in the Washington Post, which is one reason why I'm not going to do a live demo. Betsy Emmons, she took an interest in bike lane mapping and the spread of bike lanes. She actually has a map story about the spread of bike lanes in Portland. And another example of taking open government data and dragging and dropping it into MapStory. She got coverage in Street's blog but mentioned some of the comments noted that some of the bike maps were wrong. And this brought up the need for web mapping as a final statement to an invitation for peer review and collective improvement. And, excuse me, Nitin Gadia who's in the room has decided to map the evolution of his hometown, Ames, Iowa, and spawned a local initiative. Nitin is also a master data sleuth and finds all types of interesting things out there. So this first few years was really a case of learning by doing. 
The initial closed testing period teased out a bunch of bugs and resulted in some fixes and some enhancements. But the wider exposure of the world let us know there's a lot more that needs to be done besides just making the site not crash under heavy load. So the next plateau. In addition to some technical changes like building off of the next version of GeoNode, GeoServer and so on, OpenLayers 3, and contributing some of the modules developed during the first phase back to those projects, we decided we need, number one, design, design, design. If you're going to make it really possible for less technical but not less technical folks but who know the content to be involved, it has to be delightful and easy to use. We want to enable versioned editing of story layer data. This gets back to the Betsy Emmons case where someone pointed out there was errors and if the site would allow someone to show up and correct those errors and then present them back to her as a request for change in a GitHub style fashion, that would be a pretty amazing enhancement. There's volumes of public data out there. Some of it's outdated. Some of it has small errors that are easily reconcilable. But we need to be able to work together in responsible way to continually expand and improve the underlying data sets if we want to be the Wikipedia of maps. Mark Monmoneer, I should have figured out how to pronounce that, who wrote How to Lie with Maps, he has a great quote on this. Maps are like milk. Their information is perishable and is wise to check the date. So we're going to build on GeoGig and the work of LMN and their rogue projects and we're going to begin implementing support for spatial temporal editing interfaces. Now this is kind of tricky because you can imagine editing a static map is one thing. You click on the feature and change some vertices, maybe add a change in attribute, add a new one. But if you have temporal data, especially overlapping temporal data, then you really have another dimension of editing and control that needs to be introduced. For accessible styling, the existing interface was kind of a, you can do anything you want, create all these rules and hopefully you understand this kind of GISC interface for styling your data. Well, we realized this was causing a lot of problems for non-GIS folks. After reviewing a number of map stories and evaluating other excellent user interfaces for styling, it was decided to support about 15 recipes. That's the term I'm using for styling story layers and story maps. So the idea would be standard types of classifications for unique values or core plus and just try to remove the number of decisions that the user has to make. That we won't be removing the advanced styling capabilities but more promoting the simple features to allow people to get their stuff out there and hopefully simplify their experience and make the, actually make the implementation simpler. The other aspect is of the storytelling components. So I mentioned the pieces of a map story are story layers and also not necessarily spatial annotations. So you can annotate your map. For instance, if you had a map of US history, you could choose a date that was important to you and create some text or other somewhat rich media. We support minor embeds. We made some efforts of supporting YouTube videos into maps but found that synchronizing the videos with playback was quite difficult. 
But we'll try to make that workflow for authoring stories that span multiple temporal extents, incorporate rich media and have more rapid playback capacity. Currently rendering is all done on the server side which is great for very large data sets as it reduces the burden on the client. But for smaller data sets, you're stuck with the less reactive interface. The other aspect that's interesting is the multiple temporal extents and one of the concepts that I call XYZT key frames which is the idea that you can completely remove the pan and zoom buttons from the equation. And so if let's say you had a story that spanned multiple continents but needed to zoom into them at various times. For instance, I don't know, World War II. So you might put a pin in Hawaii and have the zoom for that time frame be much more cropped away. Then the next time frame, let's say, moves over to Japan. The map could zoom out, pan over to Japan and so on. And so the idea would be that not only can you control the pan and zoom but you could also control the change in time playback. For instance, some stories might have a very interval based playback to begin with but then you might want to step into an instantaneous type of playback for events that don't fit into, let's say, a yearly basis. Additional goals beyond this plateau that we're envisioning and looking for collaborators to develop include remote data streams. So right now when you put data into Map Story, you must upload it and it goes in there and it gets ingested and maybe transformed. And that's kind of a manual process. But we're envisioning it would be fantastic to support regular ingestion of remote resources. So that way instead of you having to constantly create, you as the user constantly create a shape file and doctor it up and then upload it. If you had a service that adhered to a standard or potentially even an FTP site that on a regular basis you could put updates in there. Four dimensional storytelling. So that's pretty much what it sounds like allowing 3D maps with time. The biggest challenge there, as many people know, is actually getting that data, especially if it's historical. It's probably lacking a third dimension. Mobile discovery and editing, just basically building out better support for devices. It would be pretty cool if you could make a story at a conference, for example, from your phone. And projection enhancements, for a number of reasons, but I don't recall them, the, we're basically stuck on Webmercator on map stories. And that works for most of the world like it does with many of the other online mapping solutions. But for people who have interests in Antarctic sea ice, for instance, they're kind of hosed. And those stories don't really work out well. And I pity the Scandinavians, too, because they just, yeah, they don't get as much support as they probably deserve. And finally, your idea there. You know, this is an open source project. It's, we would love to have contributions. And even ideas are good, or feedback. So this is a little bit short. I thought I might talk more, but I didn't. So thanks and happy map storytelling. I mean, in Schneider, boundless, Map Story Tech lead, and we're on GitHub at Map Story. I don't maintain any social media presence other than GitHub. So you're welcome to chat with me on a ticket there. There is Schwag, and I would invite Liz potentially to come up and talk. I'm not certain what I was talking about, but I'll be here to answer any questions. Okay. Well, we could get a question, then. 
Have you ever used it for like storm reporting, you know, after like a hurricane or something comes through and just seeing how that progresses? That's a good question. Yeah. I think actually Matt did one of the coolest map stories, I think, to date, which was Irene, right? So it had animated storm tracks, storm radius with wind direction, as well as precept from a weather service time-enabled WMAS. So that's the other aspect I think that should be mentioned is that there is the ability to reference remote time-enabled services. So yeah, you can kind of, I mean, that's the goal isn't to take your data and to make, you know, one layer. Look, it's animated. Some of the coolest mash-ups have been, I think there was a census obesity map that someone threw together with Target and Walmart stores. And so there's, you know, there's, you don't even necessarily need to provide the data. It's more finding ways to combine it and come up with something interesting. Sometimes it's hard to provide data, like, I mean, like historical precept data, for example. And that was actually to add on to that. That was part of the nexus for how and where some of this came from. So as an introduction, I'm Liz Land. I worked with the Corps of Engineers. We have a long history of partnering and building open-source technology to help with some of our problems, so grasses and example is something that we did about 30 years ago now. So exactly that is the Corps has a tendency to either a create a group of the library. Oh, sorry. So the Corps has a history of being in responses to disasters. And so we need to actually be able to edit and see things and see them happening from both the past and where they're happening in real time. So where we're not at right now and we really are eager to get there, God forbid that we have another major disaster. When we launched this, where MapStory was, Sandy and MapStory, we just weren't there yet. And you do technically see we don't have the immediacy, I would say, like after a disaster. We haven't had a disaster to have the immediacy of a response to test the tool yet. So God forbid that we have another immediate disaster. We don't want that, but that's also where you see a lot of the leaps in the testing happen. Eddie. I was tweeting out about some of your great points there. And I missed, can you talk a little bit more about the new features of GeoNode and how those are working in MapStory? Again, if I missed that. The new features of GeoNode, a lot of it's really bringing up to speed the layout frameworks that we're using. The initial implementation of GeoNode was built on some pretty old CSS. And when we went to do the MapStory redesign, there was a number of issues. There was a lot of improvements in search, user groups, which allows, you know, group permissions. So for instance, if a number of people are collaborating on a project, you could extend permissions to them. One of the features that was added to MapStory, which was nice with permissions, was the idea that before your MapStory was ready, you would want to keep it kind of hidden so people didn't stumble on it and go, hey, that's wrong. But while you're working on it, so it's kind of a MapStory in progress. So, yeah, you know, MapStory was a fun project on version one because it teased out a number of features that would have been really useful to have in GeoNode that are now there. One of the other aspects was search. GeoNode 2 now has a CSW in it and so provides better search facilities across all of the data. 
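For readers who have not used a CSW before, the catalogue that GeoNode exposes can also be queried programmatically. A hedged sketch using OWSLib; the endpoint URL is a placeholder, and the exact path depends on the GeoNode deployment.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Placeholder endpoint; a real deployment exposes CSW under its own path.
csw = CatalogueServiceWeb("http://example.org/catalogue/csw")

# Full-text style search across the catalogue records.
query = PropertyIsLike("csw:AnyText", "%flood%")
csw.getrecords2(constraints=[query], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec_id, rec.title)
```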
And the search actually scales better. Some of the issues in the first version we ran into pretty quickly once a number of layers started getting into the system. It was difficult to search. And then I guess, you know, there's other aspects. Jango is a rapidly moving project so we always have to keep moving forward with that to keep security updates and so on. Yeah. This is a great presentation. I'm new to MapStory so I don't understand it fully. And I'm just curious. It sounds like there is a, you touched on it there with the search capability. So if somebody is coming to MapStory and doesn't have a lot of data, they're not a data junkie and they don't understand, you know, what kind of geospatial data there is. Can they create a MapStory? Is it do you have data on the site that they pull in? And then is it also, just to get a better understanding, is it also a resource that people can bring their own data? It's both, yes. If you sign up as a user, you can create a MapStory from any existing public layers. There's potentially the problem that someone decides to delete that layer. That's the issue that we still have to resolve. But typically people aren't really deleting a lot of stuff. So, but yes, you can come in and some of the facilities when you go to create a map, the search is integrated in the Map Creation Facility and it actually supports trimming search results to the current spatial extent. So for instance, there was a guy putting together some serious stories and so he didn't want to search for stuff that was around the world, he wanted to search for Syria. And the other aspect is you can trim search results to this temporal extent of your existing story. So I found this great layer regarding this and I want to find any other temporally overlapping, temporal and spatially overlapping layers to combine with that. So there are facilities for keywords, abstracts get searched. So there's a lot of tagging facilities to try to get metadata on there. That actually brings me back to what Eddie said. One of the issues that I think many people find is people say, yeah, give me, I want metadata facilities, yeah, I don't like to use that. I don't like to put any metadata in. So one of the things we did do is we made it so you can't actually make layers public until the metadata is complete. So I have noticed some people, I think Ninten is smiling because he must be one of the people that just typed in ABC into all the fields. But that was an attempt to try to make people fill out relevant information, tag their data and so on. That's another aspect of GeoNo2 is that metadata ingest is supported. So if you do have sidecar XML files with metadata that's better than the auto-generated stuff, then you can have some useful additional searches. But yeah, so you can build out of existing data and you can provide your own. Percy? Can you ingest OGC services remotely or do you have to actually import them into the app before you can use them? So if the services are WMS services, they will be basically referenced. They won't be ingested. So as the map playback is occurring, basically we're altering the time parameter to that remote service. We don't have any support for ingestion of WFS right now, but that was one of the ideas of remote data ingestion. Although I think one of the features of GeoNo2 is harvesting, but I don't believe it's actually, it's only harvesting records. It's not actually pulling in the data, but it allows for better search. And then, what was the last thing you asked? Okay. 
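To make the time-parameter point above concrete: when a referenced time-enabled WMS is played back, the client is essentially issuing ordinary GetMap requests and only varying the standard TIME parameter. A minimal sketch (server URL, layer name and extent are made up):

```python
import urllib.parse
import urllib.request

WMS_URL = "http://example.org/geoserver/wms"  # placeholder time-enabled WMS


def get_frame(timestamp):
    """Fetch one animation frame for the given instant."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "storm_tracks",   # hypothetical layer
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": "-100,20,-60,50",
        "WIDTH": "512", "HEIGHT": "384",
        "FORMAT": "image/png",
        "TRANSPARENT": "true",
        "TIME": timestamp,          # the only thing that changes per frame
    }
    url = WMS_URL + "?" + urllib.parse.urlencode(params)
    return urllib.request.urlopen(url).read()


for day in ("2011-08-25", "2011-08-26", "2011-08-27"):
    png_bytes = get_frame(day)
```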
What's the social workflow like? Is there like a map-starter librarian? Are there people who like to monitor all the data being added and weed out things that are duplicates? That's a good question. I mean, that's one of the reasons why the project was built on GeoNo, just because of its focus on kind of a social mapping experience, and that will continue to improve with GeoNo2. There is no official moderator type of workflow. There is the ability for administrators to moderate comments or even they can actually moderate maps and so on, but that's kind of so far been up to the MapStory Foundation members or community folks. There are facilities for flagging data. My initial concern when we went public was that people were going to come show up and start digitizing weird animated things that were inappropriate, but thankfully that hasn't happened yet. I think that would just be a lot of work. Maybe my fears were unfounded. But yeah, I mean, that's, data is always the challenge. You can upload the data and it can be completely invalid, but until someone calls you on it, yeah. It's a similar thing in the wide web, though. You can upload a completely hocus pocus website and no one's going to stop you from doing that. There's coasters and stickers up here. I think that will make someone's luggage lighter. Thanks.
|
MapStory.org is a community-driven open educational resource that lets people share and peer review observations about how the world evolves over time and space. It's built on an open source geospatial stack (PostGIS, GeoServer, OpenLayers, GeoNode) and aims to empower both authoritative and public participation in data collection, peer review, and storytelling. We want to use this session to debut a "new plateau" for MapStory that includes an updated user interface with new features, namely integration of GeoGit Ôcrowdediting' of data and XYT frames for MapStories (what we call StoryBoxes).
|
10.5446/31727 (DOI)
|
Thanks for coming to our talk today on Mapossum. My name is James Dickens, and this is George Raeber; we are going to present this together. Mapossum was inspired by the "pop versus soda" map, which was made some years ago. Essentially, what they tried to do is map the spatial variation in the term used to describe a soft drink: pop, soda, or coke. To do this, they set up a web-based survey where users go and provide their location and give their answer, and then they map it. That website, or that map, does a good job of highlighting a geographic concept: place is important. Where you are from, or where you currently live, has a large impact on the words you use, like coke or pop, on things you say, like "you guys" or "y'all", and even on your cultural beliefs and the knowledge you are given. So what we wanted to do is generalize this map and give people the chance to make their own questions and get answers. That is what we set out to do: we made an application where users can create questions and then, through social media or through any kind of marketing strategy they want to use, get answers that are spatially located for those questions. We also tried to include many different scales and aggregation levels, which are going to continue to grow through time; right now it is points, counties, states and countries, plus a new layer type we developed called watercolor, which we will talk about more later. Before we go into the back end of the application and the framework, we are going to give you a preview of the front end and how it operates. We developed the application specifically for mobile devices at this point, because we feel that is how it will mostly be used initially; we used jQuery Mobile. Mapossum has two components: you can come and just visualize questions that have been created, or answer questions that have been created, or you can create your own questions. To answer or view questions you don't need to set up an account, but if you would like to create a question, you do need to set up an account. So it is important, when you are looking at something like this, that your location right now may not be what matters; where you are from, or where you currently reside, may be more important. So we set up two ways to report your location. You can search and use the map centroid, so if you are from someplace you don't live now and you feel that place has more of an influence over your thought process, you can use this method; or you can just use the location provided through your browser. As part of creating a question, you do have to log in, like we said, and once you log in you are given an account tab where the statistics for your questions are tracked. Right now it is pretty bland; we have just a pie chart, built with Chart.js. You can also see there are hyperlinks that go directly to your question, passing parameters in query strings, and you can pass in map types as well, so if you would prefer your question to open with a different boundary, you can do so just by using the map type. In the future we want to include more visualizations and more ways to consume your data, give people the ability to download their answers so they can put them into a GIS or whatever format they want, and also make it a little more friendly to use.
And that's kind of the gist of how it works right now. You can essentially come cycle through questions and answer them. You can send out those through Facebook or whatever using the link that's provided. And then you can create your own questions to kind of explore spatial change. So now I'm going to let George here talk about the back end and kind of tie in how the front end and the back end work together and kind of what technologies we use to make this happen. So I'm going to give you George now. Thanks. So when we set out to create this application, one of our ideas was to develop this software, develop this web application using all open source software. We're at the University of Southern Mississippi. And part of the effort was a learning effort on could we do this using GIS software and implement this idea that we developed here. And you can see the different components to the web app listed below and the different pieces of open source software, both GIS and mainstream non-GIS software pieces were utilized. I'm going to step through each of these different parts of the system and describe how they fit together. First, we have the database itself. The database has two different sets of data. I guess an artificial delineation between these two sets of data. But the data that the application creates as users interact with the application is the first set of data I'd like to talk about. So we've got a very small number of tables to keep track of the users and then another one to keep track of the questions that are generated, the user submitted questions. And we have both answers and responses. And we could have given different terms to those. But for the purpose of what we're talking about here, the answers are the list of multiple choice, the valid choices when the user encounters a question. So with the pop versus soda pop question, which we talked about at the beginning, those would be Coke, pop, soda, other. And then the responses is the table of responses by the users to the questions. And that one has a spatial component to it. So it records the point at which they submit as their location. And so those are those tables. And then the other tables that are in the database are the support tables. And most of these are spatial in nature. Basically, we've got a number of tables that correspond with the different aggregation levels that we want to be able to visualize. And currently, we have supported in the app counties what we call level one administration areas, the state level. In the United States, it would be states. But we have these for worldwide in the system. So if somebody answers outside of the United States, their data is still aggregated up. And you can see that response at that level. And then countries is also worldwide data set. We don't have these in the system yet. We're working on generating them. But we want to have multi-scale hexagons in the system so people can choose those values at different levels. The map that you saw earlier that we showed that watercolor layer, that's what we're calling it right now. And that is generated from the raw points. So there's no raw points, meaning the responses table. So no separate support table needed for that. We also have a table that keeps track of which map tiles need to be redrawn. The maps that are shown are tiled out. And at this point, we haven't actually implemented that. We refresh the tiles manually periodically. 
When a question gets a certain number of responses, its tiles move from being redrawn dynamically every time there is a request to being drawn on a less regular basis. First of all, you don't need to redraw the tiles as often, because a single response doesn't change the map that much once more and more points have been drawn; and in addition to that, the map tiles take longer to draw once there are more and more responses in the database, obviously. To draw the tiles, we have utilized a piece of software called TileStache, a map tile creation program written in Python. It will serve up your data in lots of different formats, and there are lots of different providers for TileStache, but we are utilizing the Mapnik provider: basically, it will draw tiles based on a Mapnik XML file. The way that TileStache works is that you feed it a configuration file and then start TileStache running on your server, and at that point you get the ability to request tiles using a URL like this, where you have your server, then your configuration layer (a TileStache configuration file can have multiple layers), and then the z, the x and the y for the specific tile of that layer. We modified it only very slightly, probably changed five or ten lines of code. What our version of TileStache does is allow the application to serve up tiles based on multiple configuration files, and each of these configuration files, for us, represents a question ID in the database. So the new URL becomes like the bottom one here, where we have the server, the question ID, and the map type, which corresponds to the different layers in our configuration file: the raw points, the counties, the states, and so on. So when you are moving around the web app that James demonstrated, moving from question to question or from map type to map type, it is just changing these values to request a different set of tiles. All right, the application server itself, which is what allows the database, the tile server and the front end to talk to each other, was written in Python. It is a Flask app handling the incoming requests, and it has routines for all sorts of things, like creating users and requesting the legend. The legend is generated using the Python Imaging Library based on the colors: when a user sets up a question, he or she specifies which colors are going to be used for each of the answers they provide. Right now we don't actually allow the front-end user to do that, but that is one of the changes we will make. There are also a few other housekeeping type things it does; it allows you to get the extent of the responses so that the app can handle zooming to them. And then the important ones, creating the questions and adding responses, which I will go over here real quick. So when a user goes to create a question, that information is passed to the web service, the Python Flask app, and the Flask app handles creating a new TileStache configuration file from a template. It also handles creating a series of Mapnik XML files based on the map types that are currently supported, and there is a set of templates for those: right now it is points, counties, the ones we have been talking about, including the watercolor map.
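A hedged sketch of the kind of per-question TileStache configuration the Flask app generates. The key names follow standard TileStache conventions (a disk cache plus one Mapnik provider per map type), but the paths, layer names and file names here are illustrative, not the project's actual files.

```python
import json

question_id = 25
config = {
    "cache": {
        "name": "Disk",
        "path": "/var/cache/mapossum/tiles/%d" % question_id,
    },
    "layers": {
        # One layer per supported map type, each pointing at a generated
        # Mapnik XML style that embeds the SQL for this question.
        "points":     {"provider": {"name": "mapnik", "mapfile": "q25_points.xml"}},
        "counties":   {"provider": {"name": "mapnik", "mapfile": "q25_counties.xml"}},
        "watercolor": {"provider": {"name": "mapnik", "mapfile": "q25_watercolor.xml"}},
    },
}

with open("question_%d.cfg" % question_id, "w") as f:
    json.dump(config, f, indent=2)
```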
Generating those files also includes generating the SQL query that needs to go into the Mapnik XML file in order to make it work. So here is a screen capture of the TileStache configuration file that gets created. This is for question 25, which is the coke versus soda pop question; by the way, I don't think James mentioned this, but we imported the data from that question just so that we have a demonstration question. The other questions on the site are pretty young, so they don't have as many responses. Anyway, this is the TileStache configuration file for that question, and you can see how it creates a location on disk to store the tiles for that question and then provides the layers: the points, the counties, the states, the countries, the watercolor. None of the symbolization, and none of the SQL used to generate these maps, lives in this file; those are in the Mapnik XML files that get created. This is cut off here, but here is the SQL that gets generated to create the points, and you can see, just to the left of the logo there, where it generates a symbol for each of the possible answers, coke, other, pop and so on, and gives each one a color. It pulls that color from the database; it is the color the user specified. So this is pretty straightforward for all of the map types except for the one that we call watercolor. It is kind of like a heat map. The way that I like to describe it is that if the map were a sheet of paper, each map symbol is drawn by dropping a droplet of colored water onto the map at that particular place, and so as you get more droplets around a certain location, it becomes more saturated with that color. The way it is implemented in the web app, in the Python Flask app, is that these point symbols are created dynamically when the question is created: it takes the colors the user specified and generates small PNG files that will be used to symbolize the map, and places them on disk so that they can be referenced when the tiles go to be drawn. The aggregation of all the little dots that look like that on the map is what turns into that effect. All right. This is just what happens when a user answers a question. Of course, the information, which response they chose and where they were, is what gets passed into the responses table in the database, and that information is then stored. On the front end, the client is responsible for redrawing the tiles, and on the back end the tiles are not always redrawn. Depending on how many responses there are, we redraw the tiles so that, if there are only a few responses, you can immediately see your answer, whereas if there are a lot, it might take a day to show up as your particular answer. So a future task is to automate this process, so that the system automatically figures out which tiles need to be taken out and then decides, based on the number of responses, when a new tile will be created. So what we want to update or change in the future, and James mentioned a few of these things and I talked about a few of them during the course of the talk, starts with the ability to create a number of questions and link them together as a survey.
So that when you get a certain URL with a certain query string and you send that to somebody, they will see only those questions that you've listed in the query string so that you can tell them to answer a set of questions or one or two questions at the same time. I actually have implemented for a single question to tie it together to make it more like a survey. And the second thing here on the list is to be able to embed the maps, like in a Facebook post or social media or the ability to embed the map in a news article or a blog post. And right now you can do a little bit of that. Basically, you can automatically specify which, we demonstrated the map type and the question that shows up. But we also want to make it so that they can specify a specific zoom level so they can look at a specific area. And everything that the site allows you to be customized in the URL so that those can be shared and make it easier, provide some help for doing that. We want to continue to enhance and build the visualization tools, include more map types, more aggregation levels. Looking back at that pop versus soda pop question at the very beginning, they symbolized it using kind of a tertiary soil type symbolization of using red, green, and blue in the corners. And so we could provide something like that. The possibilities are endless for the different map types that we could include in our front end or our web app. Also, the ability to include different charting options and the ability to view results, both the map results and the chart results, based on spatial and temporal queries. So the ability to look at your data for a particular area. So if like James does a lot of work for different local government organizations and somebody might want to look at their question just for a particular area. Or they might have designed a question just for a particular area. And they might want to see the responses through time if something in the media changed the value of a response. So we want to provide the ability to do that. And right now the database keeps track of all that information so that we could do that. And then one of the glaring pieces that's missing right now is the ability to kind of hover over or click on the aggregated units and describe what's below that point, particularly to show the aggregation. So you could hover over a particular state and find out not just what the majority of people have chosen, but kind of a breakdown of that state's responses or that unit's responses. And then he showed the ability to look at your data for each of the questions that you'd created. We also want to add the ability to export the data for all the registered users. So they can export the data as a shape file or as an Excel spreadsheet or a common delimited file. So they would be able to use that data offline or however else they wanted to do it. And the reason why we're kind of going over this, if anybody has any additional ideas, that'd be great. At this time, we'll open it up for questions. And we have our contact information. And that's the URL to the website up at the top. So thank you.
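Looking back at the watercolor rendering George described, here is a small illustration (not the project's actual code) of how such a droplet symbol could be generated with Pillow: a tiny PNG whose opacity falls off from the centre, so that overlapping responses build up saturation on the map. Sizes, colours and step counts are arbitrary.

```python
from PIL import Image, ImageDraw


def make_droplet(rgb, size=24, steps=8):
    """Build a soft, semi-transparent point symbol as an RGBA image."""
    img = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    draw = ImageDraw.Draw(img)
    centre = size / 2.0
    # Draw concentric circles from largest (faint) to smallest (stronger),
    # so the alpha increases toward the middle of the droplet.
    for i in range(steps, 0, -1):
        radius = centre * i / steps
        alpha = int(60 * (steps - i + 1) / steps)
        bbox = [centre - radius, centre - radius, centre + radius, centre + radius]
        draw.ellipse(bbox, fill=rgb + (alpha,))
    return img


make_droplet((200, 30, 30)).save("droplet_coke.png")  # hypothetical answer colour
```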
|
This project, originally inspired by the pop vs soda maps (www.popvssoda.com) seeks to create a web application where any question can be asked and answered by anyone with internet access. The Mapossum allows users to visualize spatial patterns in the questions they wish to pose without the need to possess the knowledge necessary to create maps of their own. The application creates a spatial web-survey system that harnesses the visualization power of a web map to explore the spatial components of question. As a tool it has the ability to help users reveal a different dimension of spatial interactions, and provides more insight into cultural and regional interactions. To accomplish this we have created a framework that abstracts the creation of questions and the logging of spatially referenced responses so that the answers can be mapped as points, or aggregated at various levels of administrative or political units (counties, states, countries). The application utilizes PostGIS/PostgreSQL to store and manipulate the data for the questions, responses, and other spatial data needed to support the application. The information is served as Web Mercator tiles using Python and Mapnik. On the front end these tiles and other data are consumed using the Leaflet JavaScript library. Users have the ability to create questions and the possible responses to these questions, as well as query the responses. The presentation will discuss the framework in detail, and we will demonstrate the use of the application for various types of question Ð response collection scenarios. The application has potential to be used as a general data collection tool for those collecting data in the field. We are also seeking to include the ability to couple the process of both answering and visualizing responses with social networking sites. The Mapossum couples a web-survey system with the visualization power of a web map to explore questions that have a spatial component to them as so many questions do.
|
10.5446/31729 (DOI)
|
All right, so let's go ahead and get started. My name is Tyler Garner. This is a GeoNode primer, a high-level overview of the GeoNode application, which is a web application for creating and sharing geospatial data and maps. Again, my name is Tyler. I'm a web developer and geospatial analyst at Nobles NSP, go by Garner TV on the internet, most internet sites, and I'm a committer on the GeoNode project. So GeoNode's primary purpose is to reduce barriers that exist between creating, publishing, cataloging, and using geospatial data. And I could kind of talk to GeoNode, but I thought it'd be better if I start off with a high-level demo just so you can see what all GeoNode is and what it does. So this is demo.geoNode.org. This is a site that we keep up with or up to date with the latest stable version of GeoNode. So if you download GeoNode, this is what you get just out of the box, essentially. Here you can see when you first come to the application, you have a list of the latest layers, the latest maps. Every first class object in GeoNode has a layer or a list view. So here's the layers list view. And these are all the layers that are stored within the application. Users can upload new layers just by going to the Upload Layers tab and dragging and dropping their geospatial data. So each layer also has what's called a detail view. And so if you click on any of those layers from this list view, you will get this detail view. The detail view provides a map of the layer. But also you can take some actions on it. You can download the layer in all these different formats. So you have tiles. You can view it in Google Earth. You can download a KML, GeoJSON, et cetera, and that essentially replicates GeoServer writes of. Any format you can get out of GeoServer, essentially we just provide the link here in GeoNode shape file and whatnot as well. Also, you can download the metadata straight from this page. So typically I'd work with a TC211 format. But again, some desktop applications, working with the metadata is a little bit more complicated than in a web environment. So in traditional workflows, you'd have your desktop application where you'd work with your data. You'd have your cataloging system where you'd work with your metadata. And that kind of workflow has kind of been improved a little bit in more recent desktop applications. But from this web environment, you simply hit just Edit Metadata. And then you have kind of like a high level view to actually go through and manage your metadata. So it's much more kind of intuitive and streamlined than traditional geospatial desktop applications. And this is stuff that's really easy to do on the web, but could be a little bit more complicated on desktop environment. So the Layer Detail page also has just some generic information about the layer. So a title, the abstract, if it's provided, that comes from the metadata, the publication date. We also store all the attributes about the layer. So right now, you can see that there's nothing in there. But if this was numeric data, they'd actually have those statistics calculated. So it had the range, the average, the median, and the standard deviation. We optionally expose a Share tab so that you can share this layer page on Google+, Facebook, or Twitter. You can also disable this, which I see a lot of implementations doing. We have a rating functionality. So this kind of provides your users with a way to provide some feedback to the actual layer owner or maintainer about maybe the quality of the data. 
And then the average here is just the average of all the ratings that the layer has received. Finally, we also maintain a comment section for each layer, each map, and each document. So just to provide another way for your users to kind of have a dialogue with the layer owner. And this has been really successful for, usually, when I see people using this as disaster response geonodes where different organizations are working within the same infrastructure. And it's not always clear who's doing what or where the data came from or accuracy issues with the data or something like that. So this kind of functionality can become helpful there. Down below, we show the legend for the layer. We get that directly from GeoServer by default. In the latest geonode, you can actually disable GeoServer completely, but all this stuff will still work just with internal Django geospatial logic. Here you see a list of all the maps that are using the layer. So you can quickly kind of go from a layer to see if there's any maps that you may find interesting that are already using it. You can create a new map from the layer and also modify the permissions and the style all from this one page. And the best thing about geonode and Django, which is the underlying web framework in general, is all this stuff is completely extensible and completely overriding. So you can override just portions of these templates or the entire page itself and have a completely custom look and feel to your geonode, which I'll show here in a little while. That's about it for the layer detail page. We'll open a new map with this layer. So when I click that button, this just goes to the map viewer and then adds that one layer in. That's the only layer. Let me see if I have one already. No, this one's take one second. Here we go. So this is what the view will look like when it loads. It looks very similar to the traditional geospatial exploitation tools. You have essentially a table of contents on the left with some tools up here on the top. This is using a technology called GeoExplorer, which has a lot of interesting capabilities and powerful capabilities just right out of the box. For example, you can print your geospatial data and you can customize these printing templates as well to have, for instance, your organization's logo. But here you'll see just the PDF of what I have on the view. But you can add your title to it, your logo, additional information that you want to have. Also, you can add external layers to this or any other layer that's already in GeoNode. So these are all the layers that are currently represented in this GeoNode. And I have access directly to them right here from the web map. I can also edit the features directly here on the web map. So this is kind of like in traditional desktop environments, multi-user editing. You have to invest a lot in your geospatial infrastructure a lot of times. It has really complicated like SDE deployments and stuff like that. But again, this is something that's really made trivial in a web application. So you can just click here, edit directly. And it doesn't matter how many users you have working with this data really at any time. So I'm not going to save that. But another cool functionality is the ability to query this. So I'll say I want to query it by geospatial extent. I zoom into here. And then it should just have the single feature. I hit query. The single feature that intersects the current viewport of the web map. 
And like the rest of GeoNode, this whole experience is really completely customizable as well. Like I said, you have to know a little bit about the GeoExplorer program in order to know how to customize this. But it is completely customizable. And you can add additional, more complex logic into it. And there's also, I believe, some functionalities that we're not using that are also available in GeoExplorer. I think actually there is a style editor here. It's where you can edit the style directly from within this experience. Instead of having to do like an SLD or something like that, you just go in here and add these higher level rules that will actually change the style of your layers. So next is the Maps View. So these are all the saved maps within the application. Again, traditionally just a collection of layers. All it is, each one of these maps also has a detail view very similar to the Layer View. You can download the map, edit the metadata for the map, set permission. So it has all the same permissions as layers do in GeoNode. But overall, a very consistent look and feel between the rest of the application and consistent functionality. The ratings and the comments still apply here as well. Next, we also allow users to upload and store geospatial or non-geospatial documents. So this could be PDFs or PNGs, pictures, Excel files, stuff like that that may still be interesting for your users or maybe like a PDF that you use to actually create a geospatial layer. You can upload that PDF in here. And then you can actually link the PDF to the geospatial layer. And so then on the Layer Detail page, you would see that this document is related to this layer. And I'm not going to actually upload a document now, but you understand. So this is really helpful, again, for disaster response type situations where you have a lot of organizations producing PDF maps. And you may or may not have access to the actual data. So this is just another way to have everything in the same architecture and same infrastructure. And again, it's a first class object in GeoNode, so it automatically has all the same permissions and all the same functionality that all the normal layers and maps and other objects in GeoNode have. All these list views have some filters, so you can spatially filter everything. Of course, not the documents, but you can also filter by categories, the date and keywords. And you can also sort them based on just common sorting, American, whatnot. Finally, there's a list of all the people that are currently within this GeoNode instance. And so the cool thing about GeoNode is that all these actions that occur in here are all stored. And so you can get an activity feed of anybody. Let me see. Admin's typically the most active user. I'll just type in admin. And then I can look at all the activities that the admin has done. So here you can see, created an upload a layer, created a map. And it's still just another good way for your users to go through and track what's being done. But it's also just a way to kind of, I don't want to say audit, but keep a good idea of what users are doing in your GeoNode. So that's about it for a basic GeoNode that you get out of the box. Of course, there's some more functionality in there, but that's kind of the major things that you should take away from this. So again, GeoNode enables your users to publish Raster Vector tabular data. You can manage metadata and associated documents all from within this high level web framework. 
You can search spatial data and spatially search your data, so you can filter by the location of your layers or the location of your maps. You can create and collaborate on multi-layer maps with other users, and you can rate all the first-class objects and add comments to those data sets. Another important aspect of GeoNode is what it allows developers to do, which is to easily brand and theme the application using just CSS, or CSS that we compile from LESS files. You can override templates to include custom functionality. Again, this is really the power of Django, but GeoNode does a really good job of compartmentalizing different functionality, so you can extend certain pieces of the application instead of having to overwrite the entire thing. If you want to add a small piece of functionality to the layer detail page, you don't have to recreate everything; you just add that additional code into what's called the template. Another powerful aspect is that you have access to functionality from a large ecosystem of pluggable apps and modules. Because it's written in Python, you essentially have access to any Python library out there: you can import it into Django and then use it in your GeoNode application. So for things like QGIS, or if you want to do damage assessments using the InaSAFE library, or even ArcPy, you can enable that functionality and expose it all through a Django web application. And, more so in our current master branch, we also allow developers to access GeoNode objects from third-party applications via the API. The 2.0 release still had an API, but it wasn't nearly as formalized as it is now in the current master branch. So you can develop mobile applications or access these objects through additional external applications. Next, security. All first-class objects have user- and role-based security, and probably the best thing about this is that GeoServer delegates both authentication and authorization to GeoNode. As soon as you modify a permission in GeoNode, GeoServer will automatically respect it. Say, for example, you have a layer that you want only one explicit user to be able to see: as soon as you set those permissions, GeoServer will automatically filter that layer out of the GetCapabilities response for all other users, and GetMap requests from other users will get a 404 instead of actually returning the image. GeoServer delegates all of that to GeoNode, so you have one spot where you update those permissions. It's a really nice ecosystem, and it's definitely very secure. And finally, like everything else in Django, the security is completely extensible. You can tap into third-party libraries like Django LDAP to enable LDAP authentication, and you can use Django's remote user authentication for single sign-on. They're very high-level libraries, and you can even implement your own authentication; it's fairly trivial. And here's just a screenshot of the permissions workflow: from the layer detail page, you click on Edit Permissions, and here, I don't know if you can see this or not, I'm explicitly giving certain users access to that layer. So next, this is probably important for anyone who may be thinking about deploying a GeoNode, especially in another country that doesn't use English as the primary language.
English is the source language for Django, or excuse me, for GeoNode, and it's at 100%, obviously. But next is Korean at 36%, and it just goes down from there. It seems Japanese and Spanish are the only other two languages that are localized above 30%, so it's something we could use a lot of help with on the GeoNode project. We use a tool called Transifex to do all the localization: we essentially load all of our translatable strings into that application, users can use its high-level interface to go through and translate them for us, and then we commit the translations back into the project and compile them into our localization files. So if you have any skills in these languages, we'd be appreciative of getting some of these numbers up a little higher. Next, I just want to talk about some of the technology GeoNode is built on. Again, we use Django as the web framework. GDAL is the low-level geospatial library, and we actually access it through higher-level Python methods. GeoServer we use to serve up the data once you upload it into GeoNode. Postgres and PostGIS we use for the actual GeoNode database, but you can also upload your data directly into a PostGIS data store, versus serving it up as a shapefile, which usually isn't recommended for production use. So as soon as your user has uploaded, you have a PostGIS table and GeoServer is serving it up over WMS and WFS; a fairly straightforward workflow. We also use the GeoWebCache embedded with GeoServer to reduce the number of GetMap requests, or the processing time for GetMap requests, on WMS calls. pycsw provides the built-in CSW functionality in GeoNode. And this next tier is our static-file tier, some of the things we use to build the client side and to manage the static files on the client side. The first is Grunt; we use that for tasks like minifying CSS files or minifying JavaScript files. We use Leaflet for the layer detail page in the current master; the one I was using in the demo, the 2.0 release, still uses OpenLayers. Even in master, we still use OpenLayers for our main mapping client, just because it has all that additional functionality I was showing in there; we haven't found a good Leaflet or other mapping library that has all the functionality of our current mapping application, GeoExplorer. Next is jQuery, obviously, just for DOM manipulation and things like that on the client side. We use Bower to manage all of our client-side dependencies, again Transifex for our translations, and Travis CI for our continuous integration and builds. I did think it was worth mentioning why we chose Django. I wasn't involved in the design process, so this is my best guess at why Django was chosen. First of all, it's a world-class geospatial web framework. Django, by default, has access to the GDAL and GEOS libraries, so any query you run in Django on geospatial data automatically gives you a geospatial result. When you get a collection of records back, you can get, say, the extent, or intersect it with some additional geospatial information. It's a very high-level geospatial web framework, and you can quickly develop very powerful geospatial applications with it. And the cool thing is the API: by using third-party libraries like Tastypie, you get all that functionality out of the box.
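To make that concrete, here's a tiny GeoDjango-style sketch; the model and field names are invented for illustration, but the point is that spatial lookups come for free on a geometry column:

    from django.contrib.gis.db import models
    from django.contrib.gis.geos import Polygon

    # Hypothetical model: any geometry field gets spatial lookups automatically.
    class Parcel(models.Model):
        name = models.CharField(max_length=100)
        geom = models.PolygonField(srid=4326)

    # Filter by intersection with a bounding box (e.g. a map viewport).
    viewport = Polygon.from_bbox((-122.8, 45.4, -122.5, 45.6))
    visible = Parcel.objects.filter(geom__intersects=viewport)
    print(visible.count())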
So if you make a model with geospatial support, the API already understands that and knows how to do intersections and things like that, so you can build very powerful applications very quickly. The second reason is that the geospatial community already has significant experience with Python itself: libraries like QGIS, ArcPy, Shapely, and the list of Python modules and Python ports out there that the geospatial community is already using goes on and on. Next, Django has a lot of batteries included, right? Things like SQL injection or cross-site request forgery are pretty well abstracted away from the developer's experience now. Django also includes a really nice admin section where you can inspect your database, which is a little bit different from most other web frameworks I've seen, specifically Rails, and I'm not sure Node includes one out of the box either. Next is extensibility. Like I mentioned before, virtually everything in Django is extensible, and it's extensible at a very high level; another example is the authentication libraries, or really anything: all the templates, the client-side code and everything else. And finally, it's because I think we're in good company. We have Disqus, Instagram, Mozilla; these are major organizations deploying web applications using the exact same technology, and that makes you feel a little bit better about working with it. So GeoNode is built by 33 committers, and at the time I made this slide we had about 75 contributors. Here they are, left to right, top to bottom, based on the number of contributions we have from them. It's pretty much exclusively built on GitHub. We have some mailing lists and things like that that we use to communicate with one another, but everything you would need from the project is on GitHub. We have over 240 stargazers, over 200 forks, 200 issues, over 187 releases on GitHub, and over 8,000 commits on there as well. So anything you need, if you're interested in GeoNode, or want to see what the current release cycle is, or how many issues we have, or even file an issue, the link is at the bottom, on github.com. So who uses GeoNode? This is a non-exhaustive list of some of the organizations that I know about that use GeoNode. We have GFDRR, which is the Global Facility for Disaster Reduction and Recovery, the Army Geospatial Center, Harvard, the State Department, MapStory, Open San Diego, COPECO, which is like the FEMA of Honduras, the Pacific Disaster Center, the Dominican Republic's FEMA equivalent, NOAA, Argonne National Labs, the World Food Programme, the World Bank, ITHACA, and the list goes on and on. You can just Google it and you'll find different GeoNode instances out there. So I want to show a couple of screenshots of some of the more successful GeoNode deployments that I know about. This is the World Food Programme's, with over 370 layers; you can find it at geonode.wfp.org. As you can see, this uses the same baseline as the demo site I showed you earlier, but it looks completely different because they've done a really good job of styling it and customizing it to their needs. They've added some additional client-side functionality, but overall made it look really nice and really slick. Next is MapStory. I think this is one of the ones where, when you first look at it, you really can't believe it's built on GeoNode, but it is. It has over 1,200 layers.
And really, its claim to fame is its geospatial and temporal framework. It allows users to tell stories using geospatial data, right? You can upload a layer and then walk through time on that layer and tell a story with it. If you have city streets with a time element on each street, you can draw them over time and show how the city has expanded, or something like that. They have a lot of really great what they call map stories, and again, it's all built on GeoNode. Next is the Harvard WorldMap. This is probably the largest GeoNode deployment that I know about, with over 11,000 layers and over 3,900 maps, and you can find it at worldmap.harvard.edu. Next is NEPAnode. This is a pilot project for the Department of Energy, done by Argonne National Lab, with over 260 layers. What I really like about this one is how well the metadata is filled out. When a deployment has a lot of really well-maintained metadata, it looks really good and it's really easy to find geospatial information on there; some you see just don't have any of this at all. Finally, GeoShape is an implementation that I worked on, on the ROGUE project. Our claim to fame is our GeoGit integration in GeoNode. We also use a custom map application: we replaced the GeoExplorer client-side map with a map that abstracts GeoGit operations at a really high level. When users start interacting with data, they are actually making commits in a GeoGit repository, and then from a high level they can go back and reset the data or revert previous changes. They can synchronize their GeoGit repository with other servers, and we also have a mobile interface that lets users do the same thing from an Android device. Next, if you need help with GeoNode, there's the GeoNode channel on Freenode. We also have two mailing lists, geonode-users and geonode-dev; the dev one is for developers, the other is the users list. GitHub issues is really the place to go if you have an issue or want to track down something you think may be wrong but aren't sure about; there's a chance we already know exactly about it and are working through it. And docs.geonode.org is our documentation. Finally, I just wanted to list some instructions on how you get GeoNode. If you're using Ubuntu 12.04, it's just a sudo add-apt-repository for the GeoNode PPA and then apt-get install geonode, essentially. For all other operating systems, just go to our GitHub page; we have some pretty detailed installation instructions for each of those operating systems. And I think that's the end of my slides. Are there any questions? Sir? I know that for the ROGUE project you created Vagrant and Chef scripts; are you going to backport those over to the core GeoNode project? I've been thinking about it. It would be helpful. I know, Simone, do you guys have some scripts also for deployment? Yeah, I think there are also some Fabric scripts out there for deploying. I'm not sure if any of them have been committed into a repository in the GeoNode organization yet, but I think making the Chef scripts is something I'd really love to do for GeoNode, and I'm hoping I get some time to do it. We'll see. I mean, it would take an hour at most to write a Chef script that did this, so I'd like to do it if I could. Any other questions?
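To give a flavour of what those deployment scripts automate, here's a minimal Fabric sketch of the Ubuntu install steps from a moment ago; the PPA name and target host are assumptions, not something taken from the GeoNode repositories:

    from fabric.api import env, sudo, task

    env.hosts = ["geonode.example.com"]   # hypothetical target server

    @task
    def install_geonode():
        # The same apt-based install described above, scripted.
        sudo("add-apt-repository -y ppa:geonode/stable")  # PPA name is an assumption
        sudo("apt-get update")
        sudo("apt-get install -y geonode")

You'd run that with fab install_geonode against your own server.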
Yes, sir? Is there support built in for, I guess, grouping permissions by user groups? Yeah, so in our current master version, we have role-based security as well. So you can group members into a group and then say, like, these are all your analysts or developers or something like that, which is obviously much easier than having to go through and explicitly specify a user, all the users who may have access to a single layer. But yes? But you can also do it on individual user basis? Yeah, either one. So the logic will walk through all of them, ensure that the users have permission either from a group or explicitly, and then continue on the request cycle after that. Any other questions? OK, thank you very much for coming.
|
GeoNode is a web-based extendable platform for the management and publication of geospatial data. It brings together mature and stable open-source software projects under a consistent and easy-to-use interface allowing users, with little training, to quickly and easily share data and create interactive maps. This talk will be a high-level overview, suitable for new users, of the application's functionality including examples of the creative ways organizations are using GeoNode. This talk will also cover new functionality being added in the 2.1 release and the status of that release.
|
10.5446/31732 (DOI)
|
OK, let's get started; it's now ten. Today I'm going to be talking about connected cars with PouchDB, and it's great to see some PouchDB committers, past and present, here in the audience, as well as people from Couchbase and Cloudant. So let's get started. What are we going to talk about today? Connected cars, obviously; it's in the title. But we're also going to talk about Ford OpenXC and go into some detail about what that actually is, and also about what a Chrome app is, because a Chrome app was essential for the demo we did for Ford. None of this is possible without PouchDB, which is an offline JSON database, and I'm going to go into some detail about that. We're also going to talk about Cloudant and how its advanced geospatial API is very useful in this example, and about how we put it all together to create a connected-car app. If anyone can't hear me or has problems with my accent, please ask me to repeat myself; I don't mind. Finally, we're going to finish all this up with a demo, an actual live demo of the Chrome app running. And if you stay to the end, I'll tell you about the prizes we're giving away. Cars? I wish. So, connected cars. Who has a connected car? Well, there's Audi, BMW, Volkswagen, Lexus; those are the obvious ones. There's also Opel, Renault and Volvo, a few more European ones. And there's Tesla, Daimler, and many, many others. This is a hugely emerging market right now. What is a connected car? Vendors are no longer relying on an in-board display. Mobile phones are updated more frequently than cars are, and changing the display in your car just isn't something you can do without unscrewing it and undoing the wiring, whereas with your phone you can just do it. So you can have your latest Android or latest iPhone reading data from your car. It's connected to a mobile phone or a tablet; it's not always a phone. If you're a routing company, for example, like UPS or FedEx, you're probably going to have a tablet rather than a phone, so you really want to connect a tablet to the car or to the van. Here's the key thing that's driving it: vehicle vendors see opportunities to build engagement apps that extend beyond the car. They want to be able to read telemetry data from your car, take it back home, and assess your driving. How can I change my driving to increase my fuel economy, for example? How can I change my driving to reduce my tyre wear? Those are the kinds of things that are of interest, and if you're a particularly bad driver, of course, the insurance company is very interested in that data. A connected car will read telemetry data, and I'll go into some detail about what data is available. Most cars nowadays have GPS. You can also read the foot position on the throttle pedal; you can read when you press the brakes, when your lights are on, when your wipers are on, when the doors open, who's sitting where. You can read pretty much everything, as long as it's read-only. You can also broadcast alerts; emergency alerts are the obvious ones, and I'll give you a few more examples. One of the key things for a developer is that there are very strict rules about distracting the driver.
Pretty much, if you're going to write an app for a connected car, you have to have it in locked mode; you don't actually have a visual display. The only things available to you are either a very simple button you can hit or, more than likely, vocal commands only: either you're directing it with your own voice, or you're receiving vocal alerts back. Here's an example. This was one of the first apps that came out of a hackathon Ford did a couple of years ago: as the revs increase, it raises the tempo of the music, which to me is a crazy idea, because as a driver I would just drive faster and faster and faster. It's also going to send you traffic alerts. I'm going to go into some detail in a minute about Ford's challenge, which was a traffic tamer challenge in London. They want to alert people when there are problems up ahead, in real time, not waiting for some helicopter. I don't know how many people know about London, but they fly helicopters around looking for traffic trouble spots, and you miss so much. When you crowdsource that data, you're going to get more accurate results. Fleet operators want to monitor wear on a vehicle. If you're pressing the brake a lot, you're going to be wearing down your brake pads, and taking the wheel off to check your brake pads is a real pain. If you can monitor that and apply some diagnostics in the cloud, it's very, very useful. One of the other apps that was demoed, again about 18 months ago, was a mash-up of weather data and car data: when there was rain ahead, it would automatically turn the lights on on the car. Again, if your airbag deploys, you might be unconscious, so you want a way to alert 911 or your family members to come and help you. And tracking of these cars: you can get your price down if you drive carefully, basically. This is something the leasing companies and insurance companies are pushing. Everyone drives a hire car like it's stolen; it's not your car, so you drive it really, really quickly and don't really care if you smash it. Well, if you drive carefully and have a history of driving carefully, your price will probably go down. It probably wouldn't change my habits. They also want to know about road damage. Say you're a local county and you've got to resurface your roads: you'd better track how many cars went over and at what speed, and do some aggregation to decide whether the road surface is deteriorating and whether you need to send someone out to check it. Now, connected-car problems. Well, a limited display; we touched on that. There isn't very much you can do if you're only allowed a couple of buttons on the display, or if it has to be locked, or it's vocal commands only, and that's going to cause some problems for app developers. They're not using Google Play or the App Store; the vendors are deploying their own app stores. Ford is going to have its own app store, whenever that actually arrives. The problem is how you get onto it, because the validation process for your app is quite long and intensive. For the traffic tamer challenge we went through that process, and we had to change ours a couple of times. And virus protection: people are already cashing in on this right now.
I don't actually understand the risk, because if you're primarily read-only it's not too bad. But people don't care; some antivirus companies are already advertising solutions for your car and making quite a lot of cash. The other problem is that, because of the delay in getting onto the vendor's app store, the cycle of updating your app when you get feedback from users is going to be quite slow. I like Android because it's very quick to update your app on the store; Apple's App Store is not too bad either. But app stores for cars, because of the necessarily long validation process, are going to mean longer development cycles. The joke we've been cracking is that we don't really know why it's called a connected car, because you're disconnected most of the time; it's really a disconnected car. You don't have an internet connection, or any local connection, for most of your journey. So what is Ford OpenXC? I grabbed this from their website, and you can see it's totally open source; you can go and check it out afterwards. It is an open-source hardware and software platform that lets you extend your vehicle with custom apps and pluggable modules. The key thing is the bit in the middle, which is the OBD2 port. They have a little dongle you can connect there, and from that dongle you can either connect directly over USB or connect over Bluetooth. The fun thing is that these ports have been available in Ford cars for the last ten years, if not longer; you can get a Mustang from about 2000 and it has it in there. They've only really been advertising this interface for the last 18 months or two years, so they anticipated this over ten years ago, which is quite incredible. I encourage you to give it a go: the OBD2 dongles you can build yourself, and they cost less than 50 bucks. Great fun. One of the things I jumped at when I saw OpenXC is that the data coming off the little device is JSON. I didn't have to do any manipulation; it's JSON out of the box, not some crufty XML. Here's an example of some of the things that come off the device: vehicle speed, accelerator pedal, your GPS latitude and longitude, your fuel level, your torque. There are also things like wipers, lights, and who's sitting where; it depends on how many sensors the car has. You're pretty much guaranteed vehicle speed, accelerator pedal, brake, and GPS, either through your phone or through the GPS on the car. There's much, much more available. There are SDKs from Ford for Android and Python, but that wasn't really good enough for what we wanted to do, because we wanted to deploy on an iPhone as well. We wanted to be on iPhone, Android, Windows tablets, pretty much everything. So we wrote an SDK in JavaScript, which is open source; you can download it from Cloudant's GitHub page. We call it OpenXC-JS, and it's pretty complete. So what about the Chrome app, and why do we need a Chrome app? Because we wanted to be cross-platform: we wanted to run on Windows, Mac, Android, iPhone, pretty much anything. We also wanted deployment to be really easy, which at the time it was, through the Chrome Web Store. And the key thing is that a Chrome app will run in the background, so it can work offline as well as online. It already has Bluetooth and USB connectivity through its native APIs, and it has an excellent text-to-speech engine, which meant we could eliminate the need for a UI.
We could actually alert people when something is happening, and we could lock the screen while it's running. And the key thing is that it's an HTML5 application built on the native APIs, so through something like Cordova you can also deploy it to a mobile phone. So, PouchDB: none of this would have been possible without PouchDB for offline storage. I grabbed this directly from the PouchDB web page, because I didn't want to do it an injustice: PouchDB is an open-source JavaScript database inspired by Apache CouchDB that is designed to run well within the browser. Its key feature is that it works offline as well as online, and you can sync it with CouchDB and compatible servers. I'm only being very brief here, because the point is that we use PouchDB and then we sync it to Cloudant. Cloudant is a distributed database-as-a-service, so it's always available. Think of a high volume of cars where you need that uptime to be able to sync the data; you can't have a server that's down. We offer advanced geo capabilities, and that's how we do the radius search for where the traffic trouble is; we'll come to that in a minute. In this particular example we sync from PouchDB up to Cloudant, but you could just as easily sync the other way, from Cloudant down to PouchDB; it depends on your availability requirements. So, bringing it all together, which I'll have to get through quite quickly. We entered the Ford Traffic Tamer challenge back in March; there were about 50 or 60 entries, and the aim was to reduce traffic congestion in London, in the United Kingdom. We thought it would be a great way to talk to people like Ford, BMW, Audi, et cetera. We wrote this in a couple of days and it really has opened some doors; I would keep looking at the Ford challenges that come out every couple of months, because you get to meet some very good people when you enter. The reason we were able to write this in a couple of days is that Cloudant and PouchDB are so simple, and a Chrome app is incredibly easy to deploy. So, the components. We have the Chrome app itself. We have a web worker, which is like a background thread for a web app, and within that web worker we're running PouchDB, so the UI, which in this case is just a vocal alert, is always active. We also had another web worker, because the people judging the challenge weren't going to drive our app around in a Ford car; they wanted to run it on their desktop or on their phone. So I wrote a simulator for OpenXC-JS that reads a trace file and plays those events back as if they were happening live, and we put it in a web worker. That's our OpenXC feed simulator. Again, it's open source, and you can download it from openxc-js on Cloudant's GitHub page. Then, using the Chrome native APIs, we have a text-to-speech engine. And of course we use PouchDB.
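Just to show the shape of that simulator idea: the real thing is JavaScript running in a web worker, but the logic is small enough to sketch in a few lines of Python, assuming a newline-delimited JSON trace with a timestamp on each message:

    import json
    import time

    def replay(trace_path, emit):
        # Read an OpenXC-style trace (one JSON message per line) and play the
        # messages back with roughly their original spacing in time.
        last_ts = None
        with open(trace_path) as f:
            for line in f:
                msg = json.loads(line)
                if last_ts is not None:
                    time.sleep(max(0.0, msg["timestamp"] - last_ts))
                last_ts = msg["timestamp"]
                emit(msg)

    # replay("london-trace.json", print)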
So, the features. The application is offline-first: everything from the car is written offline, straight into PouchDB. We're not assuming connectivity, which means you have very low-latency access to store your data; you're not doing a network hop, you're storing it locally on your phone. And these trace files, with tens to a hundred events a second, come to only a couple of gigabytes inside PouchDB, or CouchDB, or wherever you want to run it, for a whole day. Traffic alerts can be fed from a third party. Or, because we're monitoring when people press their brakes, their speed, and when they actually stop, we can crowdsource that data, produce a map on the server side saying there's traffic congestion here, there's a hotspot, and alert the user. It works with low bandwidth because we're not assuming connectivity. Peer-to-peer is the key thing, and it's kind of cool: road networks are beginning to introduce ad hoc Wi-Fi, so as you go along the road you have occasional connectivity, and through that we can go peer-to-peer with other cars and use them as a relay. WebRTC, a protocol for peer-to-peer communication through the web browser that Google pushed a few years ago, is going to come to the fore for connected cars, I think, even when everyone is otherwise offline. And of course there's emergency response; there's always going to be a big play there: when there's a car crash, you want to get 911 there as fast as you can. So here's the actual workflow. Even with the simulator we're running tens to hundreds of events every second, and we're writing those to PouchDB. But we don't want to sync all of that with the server; that's too much data to be writing up all the time, and out on the road you might be on a really expensive phone connection, so you don't want to be syncing everything. Only an actual event gets synced to the server. If you stop your car, for example, your speed is zero and we know you're not parked, that means there's congestion, and we want to alert other users in the vicinity. Every 50 seconds, say, the client does a very simple query for traffic alerts in that area, a radius search against Cloudant Geo, and any events come back. Then, when the user finishes the journey, when they go home, into the garage, wherever they're going, and they're on Wi-Fi or 4G, the whole data set is synced. That means Cloudant has all the data about how you drive your car; of course there are permission and security issues you have to be concerned about there. And the data can be analyzed after the commute, so we can say things like: if you'd started your commute ten minutes later, you'd actually have saved half an hour that day. We can assess your driving pattern. So I thought I'd show you what the data looks like in PouchDB. Because we're using Chrome, we get its excellent debugging tools, and I'm showing the resources view here within the web browser. You can see PouchDB, and on the right-hand side the stored documents and objects, in this case accelerator pedal, engine speed, and fuel consumed, and there are hundreds and hundreds of these readings in there. Then if you press sync, it's stored within Cloudant; it's in the cloud. And here I'm showing that an event has been synced to trigger an alert: in this case we're triggering on vehicle speed, and the value is pretty low, only one to two miles per hour, so that looks like traffic congestion coming up.
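That trigger-and-sync step is simple enough to sketch outside the app. Here's a rough Python version of the idea, not the actual code, which is JavaScript and PouchDB; the account name, database and credentials are made up, and for brevity the position is folded into the same message as the speed reading:

    import requests

    CLOUDANT_DB = "https://example-account.cloudant.com/traffic_alerts"  # hypothetical

    def handle_reading(reading, parked):
        # reading is an OpenXC-style JSON message, simplified for this sketch, e.g.
        # {"name": "vehicle_speed", "value": 0, "timestamp": ..., "lat": ..., "lon": ...}
        if reading["name"] == "vehicle_speed" and reading["value"] == 0 and not parked:
            alert = {
                "type": "congestion",
                "lat": reading["lat"],
                "lon": reading["lon"],
                "timestamp": reading["timestamp"],
            }
            # POSTing a JSON document to a CouchDB/Cloudant database creates it.
            requests.post(CLOUDANT_DB, json=alert, auth=("user", "password"))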
So now we're going to go into the demo. Again, this could run on your tablet or your phone, but I'm running it here inside Chrome as a Chrome app. It's actually on the Chrome Web Store, and if you look for openxc-js you can find all the links and information. I'm going to show it in debug mode. Let's make sure the volume is up. We launch the app. This is showing a map of London. It's in debug mode right now because I want to show you what's going on; of course, normally your screen is locked, because you're driving. I'm going to choose a trace file, which runs inside a web worker: London.js, in there. In a minute it will zoom across. So it's done a query against Cloudant and found that there is actually a traffic alert near you; that comes back, gets turned into speech by the text-to-speech engine, and is read out to you on your phone. At the same time, all your data has been written into PouchDB. I'm on Wi-Fi right now, so I can sync the whole lot to Cloudant by pressing the sync button just there. And it's just synced everything to Cloudant. OK, and I'll go back to my presentation. Does anyone have any questions? You mentioned this was for the Ford OpenXC device, connecting between a tablet and an OBD2 port. Did you also say you could make a similar device if your vehicle did not support that or have that piece of equipment? That's correct. There are also SDKs for BMW, Audi, Daimler and Tesla. Is the interface sending the JSON to the tablet, or is it being stored on the interface on that device and then you pull it off? The port has no storage; it's just read in real time, so your phone or your tablet is doing the storage. In this case we're doing it inside PouchDB. OK, cool. What kind of data volumes are the general guidelines, or your target, in terms of what the car companies are looking for and what you can reasonably move around? Well, currently it can store terabytes, because we're distributed across multiple nodes. We're seeing 10 to 100 events every second, and each is normally one line of JSON, which isn't a great deal per car. It's pretty high volume in aggregate, which is why you need a highly available distributed database on the back end, but a single car is not generating that much data; it's the volume of cars that generates the data. Any other questions? Well, thank you very much. OK, thank you very much.
|
Connected cars are everywhere and writing apps for connected cars is an emerging space. OpenXC is an open JSON data API for Ford cars that enables developers to write custom apps for telemetry data in real time, this includes geographic location.Bandwidth and connectivity in a car is limited however and as such an app needs to be written to be offline first. This talk discusses the difficulties of connected cars and how to overcome this big data problem with PouchDB, CouchDB, Leaflet and Google Chrome Applications.
|
10.5446/31734 (DOI)
|
All right, good morning. When I think about all the situations in which we all analyze spatial data, we all have very different cases, but very often, accuracy is very important. And so whether we're tracking forest fires or we're trying to set up a landing zone in our front yard for Amazon's drone delivery, or even just asking, what time zone am I in? If you get that wrong, it has pretty bad implications for the rest of your day. And so I want to talk today about a case where a particular open source tool, Lucine, has traditionally been inherently not exact always. And how we fixed it, we've contributed the fix back so that you can rest assured of accurate results. Turns out, we also get performance benefits to boot. So imagine we have a database of units of land stored in them in it, along with their shapes on the earth. In this case, this is a bunch of farm fields from across the Midwest United States. Let's call each of these units of land documents. So you've got some maybe data about it, some attributes, and the shape on the earth. And I want to issue a query. We zoom in here to a few of them. I want to issue a query, this white circle, asking, show me all the documents that intersect my query shape. And the correct match should be the darker shapes in the background here. And you would expect, we assume, that we'll always get the right things back. And we don't always. So for example, in this situation, it's possible via the internals of the spatial index and how this works that we might get this red guy back instead, or in addition. And this is really the problem that I want to talk about today. Now, I want to be upfront that this is only relevant if you're querying against polygons. If your documents are points like bus stops, whatever, it's not a problem for you. But if you're dealing with census blocks or marine recovery areas or building footprints, anything from the government where they like to split land into parcels, which are polygons, this is relevant. So today, I will talk about this problem of the fact that queries are wrong, say, why we care. And in order to explain why this happens, we will get a little bit into the guts of an index, similar to the last talk of you here. And I will talk about two solutions that we've implemented and benchmarks on one of those to show that it's actually not a bad thing to fix these false matches. And then I'll finally talk about the current status of the work and the free and open software community. But I want to get back to talking about these queries are wrong. And if you store polygon items in the scene, they've traditionally been wrong. And so who cares? I work for the Climate Corporation. We care. We use a huge variety of polygon data sets. So in this illustration, the thick black lines show the outline of a field on a grower's farm. And we might want to issue a query that says something like, show me all of the soil types under this handful of farm fields. So the soil types here are the blue shaded white shapes underneath the fields. We use at the Climate Corporation these data sets for all sorts of things. We use them to offer insurance, in the case of bad weather, to farmers. And also decision analytics to help farmers answer questions like, when should I plant my crop this year? When will the soil be workable? How much nitrogen is in my field? Do I need to apply fertilizer? Stuff like that. The examples of polygons include the farm fields in soil, like I showed here. Also counties, time zones. 
I showed you this picture earlier. We actually have about 30 million farm fields across the Midwest, and the very first thing a new user does when they visit our website is pick which fields are theirs. So the browser is issuing a polygon query, i.e. the viewport of the browser window, back to our database, and we need to show them all the fields in their vicinity on the slippy map. That's a spatial query, and then they click on the ones that are actually theirs. Or we want to say: show me all the hail from the last 24 hours over Kansas. In this case, the documents we've indexed in the database are these hail shapes in blue, and the query might be the shape of Kansas. Or we may say: show me all the hail over a particular farm field, and we'll use that to send a grower a text message saying, you've got hail. Should we use the voice for that? I'd probably be rich and on a yacht if not for my voice. So, long story short, accuracy is deeply important to us, because any errors at the back-end data layer will bubble up and compound themselves through our models, our insights, and the recommendations we provide to farmers. At the Climate Corporation we noticed this was happening in Elasticsearch and Lucene, so we had to find a solution. Not everyone has such a high accuracy requirement. If I'm preparing legal documents, yes. If I'm firing a missile, it's a problem. If we send a farmer a text message saying you got hail on your field, but actually it was your neighbor's field, that's a problem. But if I'm in Portland looking for the nearest bar on my phone, and there's a bar two blocks away but I get one that's three blocks away, it's OK. It turns out, though, that even if you don't have the high accuracy requirement, there can be performance benefits to the solution we've implemented, and I'll discuss that as well. So I want to take a step back and talk about what an index is, and use that to describe why this happens. An index, in bare terms, enables us to search efficiently. Say I've got documents, in this case textual documents that are a couple of workshops at FOSS4G. If I'm looking for all documents about spatiotemporal, I find that term in the index, the list of terms on the left. I can do that search efficiently because the terms are sorted and there are algorithms for this, and it tells me that the document at the bottom in blue contains that word. Or if I want documents about GeoServer, I do the same and see that I might want to look at both documents. It's important to note, though, that indexes are very often approximate. If you look closely, words like "to" or "in" are not indexed. That's typically done on purpose, but essentially, if you think of an index in a book, it helps you find what's relevant, yet the index alone can't tell the whole story. Lucene is a free and open source software package in Java that implements the type of index I just showed, although it's far more fully featured. To distribute that feature set for scalability, there are a couple of other projects, Solr and Elasticsearch, which basically expose Lucene in a distributed environment. And there is an add-on called Lucene spatial, which lets me index not only text but also polygons, like I've been talking about; Solr and Elasticsearch expose this functionality as well. We at the Climate Corporation have been using Elasticsearch quite heavily.
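To make the terms-point-to-documents idea concrete, because the spatial index is about to reuse it directly, here is a toy inverted index in a few lines of Python, purely illustrative:

    # A toy inverted index: each term maps to the documents that contain it.
    docs = {
        1: "intro to geoserver styling",
        2: "spatial temporal analysis workshop",
    }
    index = {}
    for doc_id, text in docs.items():
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)

    print(index["geoserver"])   # {1}
    print(index["workshop"])    # {2}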
And the way Lucene spatial lets you index polygons is by constructing what's called a tree-based spatial index. It takes the entirety of the Earth, from date line to date line and pole to pole, splits it into a coarse grid, and iteratively repeats the process, splitting finer and finer grid cells to some desired precision for your query. This is extremely analogous to map tiles, if you're familiar with those. The example I'm showing here is a quadtree; there are other tree-based spatial indexes, like the R-trees that were just mentioned, or k-d trees, or geohash. But any tree index has the possibility of yielding false matches, because they approximate the entire world in rectangles. To really drive this home, imagine the white grid here is some fixed depth of this hierarchical tree. This is an oversimplification, but it generalizes. If this is the grid of our spatial index, and I want to index this shape in red, I look for all the cells that intersect that shape. If we assign a unique ID to every cell in the grid, those cells become the terms of the index that point to the document, which is this red shape. So we can take the general structure of terms pointing to documents and use it here as well. That's great, but there's a problem: the cells and the shape don't exactly match up. If I issue this blue circle as a point query or a shape query, I consult the index, find the grid cell that intersects my query, look up that cell's ID, and it points to the red document and tells me it's a match, which is not true: they don't actually intersect. So this is really what happens, and why it happens. Now, with that out of the way, we can talk about a couple of solutions. Remember that I said earlier that, at least for my company, we're concerned about accuracy first and performance second. Our first solution was to put a wholly separate server between the client, whether that's a researcher or the front end of the website, and Lucene, and that server would verify the matches we got back from the index. The client issues a query, the green circle outside the white circle, and sends it to the verification server, which hangs on to it for later but also forwards it on to Lucene, or Elasticsearch. That consults the tree index, which finds, say, four candidate matches, most of which are probably right, with the possibility of some extras that don't actually intersect my query. These get sent back to the verification server, which has held onto your query, and it does a brute-force iterative intersection comparison with the candidate matches. It says: do you really intersect my query? Then it hits the next doc: do you really intersect my query? This is a computationally intensive operation, but at the end we send back to the client only the verified matches. The real thing to emphasize here is that there are two servers. This is totally accurate, works great for us, and it really made sense at the time we implemented it, maybe two years ago, because Lucene internals are kind of impenetrable unless you really understand the code.
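That brute-force verification step is conceptually tiny. Here it is sketched with shapely rather than the actual server code, just to show what "do you really intersect my query?" means in practice:

    from shapely.geometry import shape

    def verify(query_geojson, candidate_docs):
        # Re-test every candidate the index returned against the real query
        # geometry and keep only the true intersections.
        query = shape(query_geojson)
        return [doc for doc in candidate_docs
                if shape(doc["geometry"]).intersects(query)]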
Whereas this solution is simple, straightforward, and easy to implement. We can have a brand new hire up to speed in a week, if that's what it takes. There are, however, some edge cases. They're kind of weird and obscure, so I don't want to waste your time; if we have time at Q&A we can talk about them. Perhaps a little more important is the latency, both in transferring these false matches back through the network and in parsing them: you have to parse them on the verification server side just to do the brute-force verification, re-serialize the real matches back to the client, which then parses them again. It's expensive because these are all in GeoJSON, which, while a great format, takes a bunch of floating-point numbers and serializes them as ASCII strings. It ends up being quite bloated, especially when the shapes we have are not nice squares but farm fields, hail shapes, or soil that follows no nice, pretty pattern. In distributed systems, when it's expensive to move data around like this, you move the code to the data, and that's a theme I've been seeing throughout this conference as well, which is great. Our second solution takes that to heart. It's exactly the same steps as before, except we've moved it all into Lucene spatial. The client issues the query directly to Lucene, which consults the tree index, gets back some candidate documents, some of which may be wrong, does the brute-force post-filtering right there in Lucene next to the index, and sends back the verified matches to the client. So we have exactly the same accuracy as before, but since we don't have the extra server in between, we get added performance. I want to talk a little about the Lucene internals of how this is implemented. We use two indexing strategies in conjunction with each other. The first is what's always been there, the recursive prefix tree strategy, which is what I illustrated with the tiles: the spatial grid. Then we use another indexing strategy, so when you index a document and when you query, you consult both strategies. This one is called the serialized doc values strategy, because it serializes the documents' geometries themselves, not just the grid cells but the actual shape of the polygon or multipolygon, into the index itself. We use a very efficient serialization, well-known binary (WKB), which we've benchmarked, at least in terms of deserialization, to be 70 times faster than GeoJSON parsing, and it's also quite small on disk. If you don't write Java, you can space out for the next couple of slides, but I do want to show how easy this is to implement if you are using Lucene. The stuff in white is old code you probably already have in your application: you create a SpatialArgs object, which takes, in this case, the point I'm searching for and the geometric operation I want to use. In other words, I'm asking: show me documents that intersect this point. And we consult the recursive prefix tree index: I've got a field called geometry, I make a tree strategy, and I create a query using that tree strategy and the query shape itself. To use the new serialized DV strategy for total accuracy, it's only this much more: we instantiate the other strategy, and we use this combined query, what's it called?
Filtered query. Essentially, it combines the two in the bottom here. And so it consults the tree query first, and then it consults the verification strategy. And this very last line is very important. And what it says, query first filter strategy. That's Lucene speak for saying, always address, always look for the tree query first. And then look at the verification. And the reason for this is, like I mentioned, the verification is actually a rather computationally intensive process, at least relative to looking in an index. And if we were to have mistakenly done that strategy first, we would essentially be like brute force matching every single document in the index, which you don't want to do. So this is how you get around that. OK. Job is over. I wanted to, yeah, wake up. I wanted to test the performance of this new strategy to make sure that any of this extra computational hit that I've been talking about on the server side is acceptable. And so I set up 20 different Lucene and Xs, 10 of which use only the prefix tree strategy. In other words, the old fashioned way. And then 10 of them use both combined. And the 10 are actually at 10 different tree levels of the depth, which is the precision that I glossed over earlier. And for each of these, I measure the, essentially how slow is a query for a single point query. And I've done this from a different server, but both within the state of Virginia, traveling at the speed of light, talking to each other. And the reason that I did that is because I mentioned that one of the causes of latency was network IO. And then I also looked at the index size. To illustrate what I was, this may be hard to, this may do weird things to your eyes, I'm sorry. But essentially, it's about 14,000 documents that are a four kilometer by four kilometer squares they don't overlap each other, covering the state of Oregon. And I don't know how visible it is. There is somewhere in there, a single orange point that is my query. And if you think about it, since it's a point query and the documents don't overlap, the correct response has one document. I want one thing back. This is kind of like geocoding in a way. What state am I in? Something like that. And also just to illustrate, these are the prefix tree grid that I used. It's a geohash grid. And at level one, the grid is actually much coarser than this image, so I don't show it. But you can see it get finer and finer as you increase resolution of the index tree. And by level five, a single grid cell is about the same size as a document. Now they're not properly aligned, so a single grid cell might still intersect three or four documents. And then if you get smaller still, you actually have many grid cells within a single document. So results. Looking at the latency, the pink line here is the tree query alone. And you can see that for the first three tree levels, or a very coarse grid, there is a huge latency on the client side. And this is exactly in line. I don't show here. It's exactly in line with the number of false positives I get back. So the latency of pulling those documents off of disk on the server, sending them to the client. I'm not even including parsing time here. It is quite significant. Whereas in the blue line, it uses both strategies in conjunction. It does the filtering on the server. It sends back no false positives. And we have far faster query response. You'll notice still that for tree levels like 1 through 3, the blue line is slower than for lower tree levels. 
And this is because it is brute force matching every single one of these false positives. In fact, if we look back here, at tree level 2 probably 80% of my documents are still being brute force scanned. Also, these times on the left are in milliseconds, and this is for 100 consecutive queries, not in parallel. So it's not that a single query takes 25 milliseconds. Anyway, looking at this diagram, you may think that, well, I can just index everything to tree level 4, forget about the serialized DV strategy index, I don't need it. But in fact, all the way up to tree level 7, I'm still getting false positives back from the tree only. There are a lot fewer, maybe only five false positives, but I'm still getting back inaccurate results all the way to tree level 7, which our second strategy gets rid of. And so this is one good note: eliminating the false matches using the serialized DV strategy costs you essentially nothing. And I mentioned that you don't get exact matches till depth 7. But at depth 7, if you use the tree alone, the index size starts to explode. Actually, in either case, the index size on disk starts to explode, because for a single document I might be indexing 30 cells. So this is a huge blowup, and on the indexing side we don't really want to do that either. So in fact, what we'd kind of like to do is choose a tree level like 3 or 4, but use the serialized DV strategy. So we get low latency, and we get a small index size. And this is another important conclusion, that we can keep a smaller index on disk and still get fully accurate results. In other words, more accurate results and faster responses with a smaller index. So this also shows one other thing. A concern that we'd have is that since we're now also serializing every single document next to the index, every single geometry, we'd blow up our index just from that alone. But that size is constant. Obviously, it depends on the size of the documents you have. But in my example, this is 1 megabyte because of the efficient well-known bytes compression. And it's constant across all tree levels. So it's not a big hit at all. In the free and open source software world, we have contributed this back to Lucene Spatial. It is released in version 4.7, and I think we're now at 4.10. And there are open tickets in Elasticsearch and Solr. This should actually be quite easy to expose in them. It's not done yet. But if you are interested, here's where you look. And I'm also happy to discuss. I imagine this is something we'll do, at least in the Elasticsearch world. But the question is when. So I'm happy to talk with you afterwards. So to conclude, to wrap up: spatial indexes are typically approximate. Other databases may have this problem, too, although they may often address it in a similar way. We started out achieving accuracy by brute force matching all the candidates. And then we also increased performance on that same solution by moving the computation to the data. This is easy to do, and it costs you nothing. And yeah, that's it. I do want to give my thanks to David Smiley, who's here in the front row. He did most of the Lucene side work. And to Lyndon Wright, my colleague, who helped to produce some of the diagrams in these slides. So thank you so much for coming out today. I hope this is useful to you. Happy to take any questions? Yes, sir?
Yeah, a bit of a broader question in terms of, obviously, what other functions or spatial operations are you guys looking at doing in Elasticsearch that already exist in, say, a PostGIS database, topological comparisons, et cetera, et cetera. Kind of what's next on the menu of robust spatial operations in Elasticsearch on your wish list for your company? Joins? I don't think it's likely. We have evaluated PostGIS at length. I mean, Elasticsearch provides pretty much almost all of the functions that we need. We don't really care about topology, for your example. PostGIS would be awesome because we could do joins across data sets, which we can kind of hack out in Elasticsearch, but it's not as efficient. It's a bit of work, but it's OK. The reason that we don't use PostGIS, because again, it could also offer the exact same features plus more, is scalability. For a lot of data sets, it'd be all right. For a lot of our data sets, it won't. We've tried pushing it. You can scale up. You can't scale out. Now, of course, we could shard our data sets and scale out that way. But then we've lost the one thing that PostGIS buys us, those joins. They're both awesome software, though. So I mean, yeah. So the question is, did you at some point make this decision between Solr and the other product, Elasticsearch? And how did you make that decision? We did. And, here's the microphone. We did. And it's been so long ago, I don't even remember what went into that. We've been using Elasticsearch for two years. Sorry. Which probably implies that there may not be a great reason, but you have to pick one. Yeah, I don't know. Do you have any observations on the kind of spatial data that you're indexing and the size of the index itself? Like the coastline data versus just a simple shape? Yeah. So one of the kind of pain points for some of the polygons that we index is, well, OK. To be clear, we've got a lot of documents. The fields I showed you, that's 30 million of them. And I want to say the index is somewhere around, I don't even remember. It's quite large. And this is only one of many. The fields that I showed you, those don't change over time. The hail that I showed you changes, we get updates every day. One thing that is screwy with some of our data is even if the data set itself is not huge, or if it is huge, we get really weird shapes, because we're really tracking things in nature. Like rivers, not just lines, but maybe the soil under a river is this really narrow but squiggly polygon. And so a single shape might have hundreds of thousands of vertices, which we can simplify at some loss of resolution. Essentially, it's not a problem for the index, it's the same number of grid cells, but it's a problem for anything we do with it. And we've done things like, if you have a really long shape, that'll span a lot of grid cells. Or if I want to do a query against this really long shape based on this little query here, I'm only concerned with the part that's under my query, but I'm going to get back all these vertices I don't care about. And so we've tried things like splitting up, partitioning the shapes into multiple documents with some way to tell that they're really the same thing. How did you tackle the problem of updates to your data set and keeping your indexes in sync? That's a great question. Most of our updates, at least in what we do in Elasticsearch, are batch updates.
It's data we get from governments or universities, and we get it once a month, once a year, once a day sometimes. But not a lot of online writes. Hi. The Java code that you showed up there, was that in Elasticsearch, or was that like a separate piece of code sitting outside of it and calling things? This Java code is, though it's not implemented in Elasticsearch yet, but you would write something that looks a lot like this in Elasticsearch. It's actually an Elasticsearch core. Well, not yet. But yeah, so this is code that I wrote just to do the benchmarking where I was not going through Elasticsearch just for simplicity. I was just calling Lucene directly. But this is the code as a client to Lucene you write. So Elasticsearch and Solr are clients to Lucene. So if you use Elasticsearch and we get this implemented in Elasticsearch, you will actually never even have to touch this. There will probably be some little fuzzy match false or something in your query envelope. That's it. Great. So you made a call by default in the back of the order. I mean, who knows? I would guess that for backwards compatibility, which is fuzzy, you wouldn't, because it would make an upgrade have to reindex everything, because you'd have to be storing these serializations. So I would imagine it'd be most logical to turn it off by default. Also, not everyone cares about it. So why take the hit? Great. Thanks so much for coming in. Thank you.
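To make the Java walkthrough from earlier in this talk a little more concrete, here is a minimal sketch of the combined two-strategy query against Lucene Spatial 4.7-era APIs. It is not the speaker's actual code: the field names (geometry, geometry_serialized), the tree depth, the example point, and the surrounding index/searcher setup are illustrative assumptions.

```java
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;
import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;
import org.apache.lucene.spatial.query.SpatialArgs;
import org.apache.lucene.spatial.query.SpatialOperation;
import org.apache.lucene.spatial.serialized.SerializedDVStrategy;
import com.spatial4j.core.context.SpatialContext;
import com.spatial4j.core.shape.Point;

public class AccurateSpatialQuery {
    // Geodetic context; a coarse geohash grid (tree level 4) keeps the index small.
    static final SpatialContext CTX = SpatialContext.GEO;
    static final RecursivePrefixTreeStrategy TREE =
            new RecursivePrefixTreeStrategy(new GeohashPrefixTree(CTX, 4), "geometry");
    // Stores each document's full geometry in a compact binary form ("well-known bytes")
    // in doc values, so candidates can be verified exactly.
    static final SerializedDVStrategy VERIFY =
            new SerializedDVStrategy(CTX, "geometry_serialized");

    static TopDocs search(IndexSearcher searcher, double lon, double lat) throws Exception {
        // "Show me documents that intersect this point."
        Point queryShape = CTX.makePoint(lon, lat);
        SpatialArgs args = new SpatialArgs(SpatialOperation.Intersects, queryShape);

        // Fast but approximate candidate lookup from the prefix tree index.
        Query treeQuery = TREE.makeQuery(args);
        // Exact geometry check against the serialized shapes.
        Filter exactFilter = VERIFY.makeFilter(args);

        // QUERY_FIRST_FILTER_STRATEGY: run the cheap tree query first, then only
        // brute-force-verify the few candidates it returns.
        Query combined = new FilteredQuery(treeQuery, exactFilter,
                FilteredQuery.QUERY_FIRST_FILTER_STRATEGY);
        return searcher.search(combined, 10);
    }

    // At index time each document's shape is fed through both strategies, e.g.:
    //   for (org.apache.lucene.document.Field f : TREE.createIndexableFields(shape))   doc.add(f);
    //   for (org.apache.lucene.document.Field f : VERIFY.createIndexableFields(shape)) doc.add(f);
}
```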
|
Lucene, and the NoSQL stores that leverage it, support storage and searching of polygonal records. However, the spatial index implementation traditionally has returned false matches to spatial queries. We have contributed a new spatial indexing strategy to Lucene Spatial that returns fully accurate results (i.e. exact matches only). Better still, this new spatial search strategy often enables keeping a smaller index and faster retrieval of results. I will illustrate why false matches happen -- this requires a high-level walkthrough of spatial index trees -- and real world cases where it makes a difference. Our initial workaround was to query Elasticsearch through a separate server layer that post-filters Elasticsearch results against the query shape, removing the false matches. We've now built a similar approach into Lucene Spatial itself. By virtue of living inside, this new solution can take advantage of numerous efficiencies: 1. it filters away false matches before fetching their document contents; 2. it uses a binary serialization that is far faster than the GeoJSON we used before; 3. it optimizes the tradeoff between work done in the index tree vs. post-filtering, often resulting in a smaller index and faster querying. I will provide benchmark numbers. I'll illustrate how developers and database administrators can use this improvement in their own databases (it's easy!).
|
10.5446/31737 (DOI)
|
All right, I think it's about 10 o'clock, so let's get the show on the road. My name is Peter Hansen. I work for the Geographical Information Center at California State University, Chico. I have a presentation called Geotools Geoserver GeoGig, a case study of use and utility field work. It was called GeoGit, now it's called GeoGig. Kind of same difference, just a new name. At the GIC, we have kind of a dual mission to provide, it's both academic and service oriented, and we provide opportunities for the students at Chico State to work in a GIS professional office and get experience that way. And we also provide GIS services to a variety of state agencies in terms of data development and services, as well as folks in our local region, so some cities and municipalities. And one of those clients that we work with is the Butte County Mosquito and Vector Control District. For those of you who aren't familiar with the duties of mosquito and vector control, their intention is to maintain and suppress mosquitoes and other vector nuisances in various ways, which I'm not incredibly familiar with because that's a whole different industry in science. The point here is that they have field crews that go out and they collect data, and they need a way to collect data and view data and edit that data in a better fashion than kind of what they were doing before, which was something that I'm sure as GIS people you've all come across where they're doing it in a paper setting. So additionally, that data that they collected needed to be versionable and they needed to have a history. So when they went to a source point of big pool of mosquitoes, they wanted to see what was going on there in the years prior. So it was important that they could see back to what had been done before and what the history of that site was. So we had our initial application that we built was on Esri and it worked and it works and they use it. But it obviously came at a cost. And that cost made it difficult to bring this sort of program to other districts in any industry really. So while that was always handy and worked out, it just comes at a cost. So we had an intention to find an alternate solution that provided the same functionality of the product that we had initially built for them. So we climbed a ladder to this guy and we looked at open source options. And this is kind of preaching to the choir, but obviously there's a lower cost of entry for open source. There is an initial time and therefore money commitment to finding stuff out, kind of a discovery process. But we knew that the tools that open source provided were commensurate and in some ways could exceed what the Esri stack provided for us. So there were additional benefits that were kind of for our shop. Since we were using the Esri ArcJS Mobile SDK, we were already extending that with some custom programming. So we were already making that time commitment to kind of creating a custom deal for our client. It allowed us to have a tool now that we could bring to other places without that cost. And in more of an open source sense, it allowed us to be involved in the community, to explore, to share what our findings were, to get involved, come to places like this and hear what people have to say and present our findings. So the proposed stack that we went with was Postgres, PostGIS extension with GeoServer, the GeoTools application on top of that and using GeoGig for kind of the versioning and the history. 
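As a rough illustration of the stack just described, here is a hedged GeoTools sketch of a desktop client reading features straight out of the PostGIS database. The connection parameters, the table name (mosquito_sources), and the use of the gt-swing map frame are hypothetical placeholders rather than the GIC's actual application code, and error handling is omitted.

```java
import java.util.HashMap;
import java.util.Map;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.map.FeatureLayer;
import org.geotools.map.MapContent;
import org.geotools.styling.SLD;
import org.geotools.styling.Style;
import org.geotools.swing.JMapFrame;

public class MosquitoViewer {
    public static void main(String[] args) throws Exception {
        // Connection parameters for the PostGIS datastore (placeholders).
        Map<String, Object> params = new HashMap<>();
        params.put("dbtype", "postgis");
        params.put("host", "localhost");
        params.put("port", 5432);
        params.put("database", "vector_control");
        params.put("user", "gis");
        params.put("passwd", "secret");

        DataStore store = DataStoreFinder.getDataStore(params);

        // A hypothetical table of mosquito source sites maintained by the field crew.
        SimpleFeatureSource sites = store.getFeatureSource("mosquito_sources");

        // Build a simple map with a default style derived from the feature type.
        MapContent map = new MapContent();
        map.setTitle("Mosquito sources");
        Style style = SLD.createSimpleStyle(sites.getSchema());
        map.addLayer(new FeatureLayer(sites, style));

        // GeoTools' Swing viewer gives zoom/pan out of the box; a real field
        // application would add custom forms and editing on top of this.
        JMapFrame.showMap(map);
    }
}
```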
Since we already had that existing application basically pulled out, that data in shape files dumped it into the database connected to GeoServer. GeoServer allowed us to feed kind of base data which acted as the base map for the GeoTools application. Those three projects are all very mature, very stable. I don't think I need to go into a ton of detail on how those three components came up. And definitely on the GeoTools one because I was in the Java developer that did that. So that fell out. There's a lot of documentation on GeoTools development. So what we created was an application that could read all the same data that we were seeing before. They could query it. They could manipulate it. Zoom in, zoom out, pan, custom forms, all the things that they were able to do before. So where we're at is kind of right there, we have the application that's kind of doing that. We're stuck on that versioning part. That has proved to be more difficult than we anticipated. So when we wrote this abstract several months ago, it was like, yeah, we'll have that versioning bit done by then. But it's still kind of a thorn in the side. So some things worked and some things did not work. So in kind of typical open source fashion, you go back and look at what other tools are out there. So the database loading that works, feeding out data, that works. The application, like I mentioned, you can do all those things. But what I've seen here in the past couple days even is questioning that use of GeoGig whether in this particular instance, if it is providing almost too much complexity for the application that we need it for. So in an interim, even before we got to this conference here, we were kind of doing this bastardized version of various states of shapefile. Our client was not a complex enough use case to where we really needed to go into this real deep versioned editing. There's only 15 or so guys in the field. They were never overlapping areas. So the check and check out process was not as complex as it needed to be. But that's not always going to be the case for other utility agencies or the like. So I've seen now several uses that talks have gone through the past couple days of people using the GitHub IOPages with the GeoJSON support. That seems like a real viable option. There may be a lack of understanding on my part about why that GeoGig platform needs to exist when the GitHub platform is kind of doing the same thing kind of all on its own. If anybody has that knowledge, I'd love to talk to you guys about it. But given that this kind of newer solution is, you know, we haven't really worked with it, I have an idea of how it's going to work in my head. And I'll, you know, work on that when I get back to the office. But if you guys are looking back on videos, check out Landon Reed's discussion. He's a guy that works for the Atlanta Regional Commission. And they were doing, using this for versioning and kind of in a way kind of like a crowdsourcing of data editing for roads in the Atlanta region. So that's a real complex data set. A lot of features, a lot of attributes, and allowed for people to pull and push data and manipulate it and let it be reconciled by, you know, the administrators. So in this case, where we would use this is probably getting that Geo, pulling that GeoJSON down, pushing it into GeoTools, manipulating it there and then pushing it back up for it to be reconciled with kind of the master database. So there's future work to be done on the app, obviously. 
And one of those things is to incorporate that technique and see how it goes and report back to the community and what we found out. We'd also like to improve the UI and the SLDs. I don't know how many folks here have worked with SLDs, but they are kind of a slog and they're a bit difficult to work with. So improving that cartography is something that's very important for the field crew. You can incorporate a GPS dongle onto the field laptops that they're using so they can, you know, utilize that when they're out in the field rather than having to pick a point on the map. And then, of course, we'd need to test it and debug it and test it and debug it and get that feedback and really flesh out that product for them. And for us as a shop and being involved in the open source community, you know, we have a responsibility to become more involved. And I've been kind of dabbling in various facets of open source GIS for several years. And coming to this conference, I mean, you see a lot of the names you recognize, but then there's a whole host of other people that are doing amazing things. And they're putting things out there. So we have a need to go and discover more. And there's a ton of stuff on Twitter and there's a ton of blogs. Obviously, a lot of stuff is being shared on GitHub. So I've met also a lot of people that are not really, they're really noobs in this kind of open source GIS. So there's a ton of stuff out there. So go and find it. Go and look at it. And then contribute more. Put the stuff out there that we're finding, even if it doesn't work. So GeoGig didn't work like we wanted to. Maybe this can help some of the developers to streamline their efforts and getting more folks like us to be able to use it. So I would do have to mention that. What kind of held us back, I feel like, was the integration into GeoServer, the documentation on that was not as much as we needed. And then a thing that I've seen stressed on a lot this week is the need for additional collaboration locally and on the web. So getting involved with folks in our community that are not just GIS people, but in different industries that have interest in spatial and kind of really maximizing everybody's efforts and interests into something that's going to work for everybody. So yeah, that's what I've got. So if there's any questions, we'd be glad to take them. Hello. I was talking to somebody regarding the use of triggers as a way to provide version support for multiple editors. Have you looked at that at all? No, because for this case, the lack of overlap, we didn't need that level of complexity, but look into that. One of the improvements you wanted to make was around SLDs and kind of the difficulty in working with them. Do you have any ideas or plans as to how to improve that? Well, yeah, it's a better editor. The composer that was an open GioSuite was nice. I believe there's a QGIS plug-in. One of the most successful ones I was able to use was in Udig. Actually, you could export out the SLD or export out the XML and create an SLD. But yeah, there was a lot of stuff that was case-sensitive and it would just throw errors and GeoTools. It was just trial and error. You just look at the log and see what was making it break and go back and fix it. But there was a company that made, I believe it was a German company, you could make your SLDs and ARC and spit them out. That worked kind of. But once you got into any sort of regreel deep symbology, like kind of thematic stuff, it would just kind of bit the dust. 
There's a lot of stuff happening with CSS, cartography with CSS. I feel like that's where a lot of the effort is at. I'm curious what will happen with SLDs and their use. I think they're still widely used. Yeah, actually, I was just going to comment on that. There is a CSS plug-in to GeoServer. I don't know if you've looked at that, but we use that and like you say for thematic stuff, it's way, way nicer. It does generate the SLD behind the scenes, so we would definitely recommend looking at that. What's that called? It's just a whatever, CSS plug-in to GeoServer. It's easy to install and definitely a lot nicer. And then just one other thing about using GitHub and why GeoGig, I'm not an expert on either of them, but I kind of get the impression that the GitHub stuff is for relatively small files. Yeah. Maybe a few thousand records or something. We deal with much larger datasets and I think GeoGig is maybe aiming to work with larger datasets. Yeah, and that's something that wasn't, I think, a lot of the time. That wasn't addressed fully in either of those talks that I'd seen, actually. Yeah, it will work for the smaller stuff. And I think for the larger ones, they were kind of breaking them down into kind of more regional deals. But yeah, and I think that's kind of a limitation for GeoJ, so anyway it's kind of got, it's slim but it's still got a pretty decent overhead. But for simpler cases, yeah, I think it's just like in Vladimir's talk about simplifying, you know, if you don't need to go with, I'm hesitant to call it bloat, but sometimes you don't need the solution that does everything. You need the one that does what you're working on. Thank you. Thank you.
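For readers who have not run into SLDs, here is a minimal hand-written SLD 1.0 example of the kind of thing the talk calls a slog: even a single red dot for a hypothetical treatment-sites layer takes this much case-sensitive XML, which is part of why editors like the GeoServer CSS extension are appealing. The layer name and color here are invented for illustration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<StyledLayerDescriptor version="1.0.0"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc">
  <NamedLayer>
    <Name>treatment_sites</Name> <!-- hypothetical layer name -->
    <UserStyle>
      <FeatureTypeStyle>
        <Rule>
          <PointSymbolizer>
            <Graphic>
              <Mark>
                <WellKnownName>circle</WellKnownName>
                <Fill>
                  <CssParameter name="fill">#cc3333</CssParameter>
                </Fill>
              </Mark>
              <Size>8</Size>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
```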
|
After creating a custom application featuring ArcSDE and the ESRI Mobile SDK for use by the field crew from a local Mosquito and Vector Abatement District, we sought an alternative to the high overhead from the proprietary software. By utilizing GeoTools, GeoServer, and GeoGit, we were able to develop a full-fledged application maintaining the same functionality and usability of the original application, without the high cost of entry.The GeoTools application, "Mosquito," and GeoServer, were placed on each of the field laptops of the twenty-member crew, serving both the application and cached base layers to allow for offline data connection. A USB Bluetooth GPS dongle was used to allow workers to locate themselves within the application. GeoGit was utilized to allow the disparate field workers to merge and synchronize data to the master database at the end of their shift.
|
10.5446/31744 (DOI)
|
to one of his problems. I can't remember what it was. It was a little lady, me myself. Oh yeah, yeah, that's right. He chases shiny objects. So my problem is actually similar. I sometimes chase objects that don't yet exist for reasons I don't know, sort of, you know, solutions looking for a problem. And at the prep for this, I just searched for a bunch of cat photos because that's what the internet is for. And this sometimes makes me do silly things. I didn't do this, by the way. So bear with me. I'm going to talk about a solution to a problem that didn't exist until recently and that's open drone map. Going back in time, I apologize, I can't actually see my screen here. So going back in time, back in time, there we go. So back in 2004, I was working on what I call my non-PhD in forest ecology, my non-PhD because I did three field seasons and then bailed out. But some of the fun stuff I got to do while I was there was I got to fly a big balloon. Actually, that's almost a scale right there. I think it was an 11 foot balloon. I don't remember now. I think it was an 11 foot balloon. It had a lot of lift. Which now that I've read the FAA restrictions on balloon, tethered balloon. Anyway, don't buy an 11 foot balloon, okay? Six feet. That's as big as you need unless you want to get the FAA involved. But that never happened. So I flew a large balloon with an array of cameras and I took a bunch of images. The array had some modified cameras. There was a nice Kodak DCS. Oh, what was that? It was a beautiful camera. It was a pro sports camera. So it had amazing ISO levels. And unlike now, it wasn't some filtering process which sort of smoothed and removed the pixels artificially. No, they had the H pixel had no extra electronics associated with it. So the whole pixel was the actual capacitance cell. So all the light hitting that, and this is a full frame, is a full frame CCD. All the light hitting that got back. And so here it is. I think that was made, it was manufactured in 2001 or 2002. It was able to do ISOs of 4,800 and 9,600. So really, really fast camera for the time. Also, I bought it a few years after it came out so I didn't have to spend $14,000 on it. So then put some filters in front of it. So the near infrared filter, we had some band pass filters. Those band pass filters were in order to do biological research. We wanted to understand, okay, what's happening with the leaves at the leaf level? Is there light stress? So it's lots of fun stuff. Total failure of a project. Way too early. Because there was no trivial way to figure out where we were taking pictures of. And so I started looking at the computer vision and got all excited about it and then realized that I didn't either have the computers nor the code to get any of it done. So a few years ago, I saw a presentation. I think it was a keynote that Peter Baddy gave at some geo conference of some sort. And he was talking about the future of geo. Or maybe it might have been. Actually, it was probably, well, anyway, whatever it was. He was talking about the future of geo and wearable devices and always on cameras and the reconstruction of 3D space and sort of all the things that could come to be. And it started getting, it sort of was a worm in my head. It sort of reminded me of the problem that I couldn't solve before, which was how to take these arbitrary photos that come from, you know, they come from whatever. They may not have GPS associated with them and construct them into something that we can use in the geospatial world. 
So let me switch over to the presentation here. Open Drone Map. So the idea here is, well, first of all, the name. It's not really a drone. It's not really a map. And for those of you who really like BSD and Apache and all the relatively liberal licenses, it's open like the copy left. So Open Drone Map. It's a well-named project. So the idea is that we go from those arbitrary images through some sort of processing pipeline into point clouds, digital surface models, textured digital surface models. So this is like a full immersive 3D environment and ortho imagery. Wow, nifty stuff. And, you know, then of course you can derive 3DEMs and classic, you know, raster surface models and so then the question becomes how do we get there? And so, well, actually the first question is what is it? So it's a processing tool chain for civilian UAS imagery. Very simple. Based on three basic technologies. And actually I can't remember what the acronyms stand for. So look those up. Bundler, PMVS and CMVS. These are old, and this is old computer vision code. This is stuff that's well referenced, well solved. It also means that it outputs a bunch of text files and the data structures are a mess. So we'll have to fix that. But, you know, it's really, really cool stuff. And it's currently deployable on Ubuntu 12.04, 32-bit, don't put it on 64-bit, don't put it on 14.04. But we'll fix that, don't worry. And, yeah, it's a work in progress. So what is SNA? It's not complete, but it's actually, there were some quantum bleeps this week. Right now it's in place. It's a full 3D point cloud. Well, it's not, I'm sorry, I should say, what we're not trying to do is replicate things that MeshLab and Cloud Compare have done as far as full 3D point cloud and Mesh editing environments where you can do fancy complicated things. There's good tools for that. And MeshLab, admittedly, is a little hard to use, but there's a bazillion YouTube videos showing you exactly how to do it. So there's at least some pedagogy for getting you there. And it's not flight planning software. And it doesn't solve everything. Also, it's not yet complete. We still need to build in Mesh creation, Textured Mesh creation, and Ortho Photos and all these associated things. But that's really close. So in the short term, what it'll be is essentially a single command you run on the command line, and you pipe your images through, and you get a bunch of files in the long term. You might wrap it in a web front end. We might put an easy sort of Python wrapper in QGIS for running it, and of course, still have terminal access. And then data structuring is something that will need to be addressed, where it pushes into PostGIS or some sort of file-based data structures. So we won't get to how it works yet, but let me take you through the sequence. And this is dark, and I apologize. But the rest are not. So first, take all your photos, and you do full stereo matching. Or you do partial stereo matching. So you're basically just trying to figure out which ones are related to each other. That gives you a sparse point cloud, and you do full stereo matching, where you compare every pixel to every possible pixel and every other possible image that's related, and you get a dense point cloud, and then you mesh it. So you turn it into a surface, and then you start painting that surface with your original images. And, well, maybe that's not the ideal way to do it. Maybe we need to do some blending, and we need to do some other things to make that look great, but we can do that. 
That's there, that we have the technology. And that's pretty much it. Now you've got a textured surface model. Now we can take the next steps in creating secondary products, DM, digital surface model, run through some classifiers to separate out the buildings from the trees from the ground, and create DMs and such. So let's do quick demonstration, which is going to be really tricky. I really do like you guys. That's not why my back's too. Okay. So the wonders of a live demonstration. Here we go. Okay. So dense point cloud. Essentially, this point cloud is constructed from these original images. And if we wanted to sort of see what those original images looked like, there's the original image. There's the dense point cloud. This you can get now from Open Drone Map. So feed a bunch of overlapping images in, and you will get a nice dense point cloud. And much like Aaron demonstrated yesterday, or talked about yesterday, if you want to keep your coffee warm while you're running it, you will have that option. But you can also run it on Amazon and keep their server rooms warm. So those are the actual original photos. And we can sort of see, let's look at a couple of them that are adjacent. And again, we're doing feature matching between them. So this tree here is the same. Yeah, it's the same as this tree here. Extract some matching points. Figure out how they're related. And then we do fancy magic where you're doing some linear approximation of where the rays intersect. So you can reconstruct the original camera positions and the original points. Don't make me go into it because I don't actually fully understand it. I can just say that much one sentence. So okay, we've got a point cloud that's pretty nifty. Now what would be really nice, what would be really nice is a three dimensional mesh. So and this, we need to do some improvement, machine algorithms. So we're doing the trees separately from the buildings, but there's, there's our buildings there and the adjacent trees. We've got some artifacts, some holes that don't exist. Oh, and I should say this is Seneca County, Ohio. I grew up in Northwest Ohio. It's hard to show just how flat it is. But that's real. And those ditches are about somewhere between eight and 12 feet deep. So that gives you a sense of how much topography there is in the rest of the landscape as well. So maybe next time we'll do more, a less flat location. So this minus the machine can be done now. The rest can actually be done in mesh lab for the time being. And we'll work on getting that scripted into the rest and the code pulled in. But essentially we can take those photos and we just run this little texturing from registered rasters and this does our, this does our drape back. So now I'm going to do a high CPU process live in front of a studio audience. We'll see what happens. I guarantee a computer crash. So it'll be hand waving for the rest of the presentation. I do have a song prepared. Does anyone want to hear a song? What's that? Dance? Only if someone's willing to play the drums or the guitar. Okay. So it worked. Yay. I don't have to sing. You guys don't have to bear with me. So there it is textured. This is actually in order to, I cheated. This is not a full texturing. Like this is a low resolution texturing. If we did a full resolution texturing it'd be even prettier. You'd be able to see the center line of that road quite nicely in full resolution. But pretty snazzy. We've now textured it back to the model. 
Now the next steps will be how do we then North rectify it? How do we then geo reference it? But those are comparatively easy problems. And actually I'm going to run that just because that didn't crash. We're going to run it one more time. Alright. See if we can crash this. I really like singing. I like dancing too. Do we have any drummers in the room? Anyone with a guitar? Alright. Well hopefully it doesn't crash. This is all GPU intensive and I've got an ultra book. So alright. So now we have a little bit more detail in our model. We can zoom in and see that line a little bit better. If we did a full texturing this particular model would have an on the ground resolution of about two inches. So some interesting things you can do with that. So I worked for Metropolitan Park District and one of the things that we're going to do with this, so there's two basic projects we're going to do with this. One is we're going to fly a number of natural areas including the wetland, the south end of our district and identify invasive plant extents for our strike team and monitor that over time. We've never really had a good way to sort of really monitor the extents. We also haven't had a good way to identify where those tiny little patches are that didn't quite die off. You send a strike team out, they spray a whole bunch of poison, stuff dies, you feel pretty good, you move on to the next place where you're going to spray poison and one little plant just survives and you know the invasive plants they take over and it's a mess and of course you just killed everything so you know it's a perfect opportunity for that invasive plant to come back. So that's one application we'll do with wetland and then the other thing we're going to do because we can get one inch imagery in color and Fred, we will be doing identification of native communities. So in this large wetland complex it's very difficult to access, very difficult to walk through, very difficult to map. We'll have a sense of where some of those really interesting communities are and potentially find some of the rare things that we know we should be good stewards of and be paying attention to and be spraying the heck out of all the invasive plants that are anywhere nearby. So those are, that's sort of the natural resource application. The other application which is not as interesting to me from the kind of project it is but more interesting to me from sort of the technical standpoint will be inventory of engineering structures that move over time, engineering structures along the lake and so we'll be able to hopefully get some really good vertical accuracy and watch for changes. We'll send our surveyors out to take individual points, we'll lock these things down in real 3D space and then we can measure movement, not just movement of points that are registered that we send our surveyors out to periodically, that's useful, we need them but then also find out what's happening in between. Okay, so the demo succeeded so no song yet. Let's say demonstration GPS photos. Okay, so there's another application here. So this is drones, open drone map. Again I said it's not open as in BSD, it's not drone as in killing people, it's not really a map. So it's almost, it's actually better named than open street map because it's even less what it sounds like it is and that's what I was aiming for. So let's go a little deeper here and do something even less like open drone map. Well, first of all, so question distribution. 
So how do we then distribute this, how do we share this and there's some open-ended questions here but one of the things that we want to do is push the ortho products from this into open aerial map and then there are a couple of other, the genesis of a couple of other DEM digital surface model mesh type projects trying to collect global data sets of these that are just sort of in the genesis stages and those will hopefully be the repositories for all the data that people process with this should they choose to. But what's really cool is folks like Howard Butler and, oh shoot, well okay I'm not going to get all the names but Howard and folks who are related to him are doing things like taking C++ last zip compression libraries, so LIDAR compression libraries and porting them into JavaScript by compiling, by using JavaScript as the compilation target. Crazy stuff, shouldn't be done, has been done, is wonderful. So now you can actually throw a last zipped, a point cloud basically zipped as a last object and throw it on an Amazon machine somewhere and actually view it in your browser, have it stream over as a few megabytes, a few hundred megabytes instead of the few gigabytes that it really is and then inside the browser it decompresses and you can display it with WebGL and so not only can you easily store this and give it to people but they can easily view it and use it. Okay, a quick plug and a side. I went to Phosphor G Korea a couple of weeks ago, folks there are wonderful next year, it's in Korea, you should go and you should be at least as silly as I was. It's a very, very, very beautiful place with beautiful people, a lot of energy on the geo side and a lot of fun. The reason I bring that up is while I was there I was working with something called Mapillary, oh I don't have a Mapillary thing here, Mapillary. So Mapillary is for those of you who haven't seen it, it's intended to be a global crowdsourced Google Street View product, pretty liberally licensed, it's similar licensed, well compatible licensed with OpenStreetMap and they do these wonderful things where, oh yeah there we go, so there's my little street view from Goggleman, I can't pronounce any Korean every time I tried, there was laughter, very nice laughter but nonetheless. Anyway from the tech district in Korea and so I've got this series but you'll notice actually if you look at this line, I did not have soju before I walked this, I want to point this out, I was actually perfectly sober, at least as sober as I am on an ordinary basis and I certainly had not consumed any alcohol within I think probably a week of when I did this walk and you can see it just going all over the place because I had this GPS enabled camera that was just not that good. So the photos are great but the GPS wasn't that great so I said well this is a problem for OpenDroneMap naturally. Now I didn't have OpenDroneMap far enough along at that time to actually use it so I might have used a commercial software package to do this but we won't talk about that. So here's a subset of those points, they look perfectly fine, I mean other than not being in a straight line because I was actually walking along, a sidewalk along the road so if we take them into QGIS and just do a quick heat map we can see we've got duplicate points, right? 
This should be just all one continuous red but we can see this is like one point, this is like two points, this is like I don't know how many points and then you get here and you got basically every time I took a photo it wasn't properly updating the ephemera from the GPS and so in addition to being wrong it was really wrong. So feed that through, that structure from motion approach so we take each of those photos, we match them to each other, we match similar features in each and now we know the relative camera positions all the way along that sidewalk. So now we've got not in real space, not in any coordinate system that we know, but we've got some sense of what the spacing of those camera positions were. So cool, that's fun, what can we do with that? Well there we go, there it is sort of geographically, runs at 45 degrees because we're probably the model somehow optimizing it to minimize the X and the Y axis. So what we do is we actually figure out what the local coordinates are at the end points. So look at this end point and this end point and do a quick linear equation to express that and now we've got it actually geo-referenced, nifty. Now we run it through a heat map and we see everything looks about the same, it's even, we've now gone from this set of points to, from this set of points to nicely spread out points and we shifted our geography as well. And so if we, never, I'm a terrible cartographer, let's be honest. So here it is on top of the map. Here's the original points, nowhere near Tehran Road and here they are in the correct location, perfectly spaced and even with those little places where I had to walk around, groups of people or trees as I was switching from across the road. So open drone map, not just drones, not just maps, in fact not drones, not maps and open as in GPL. Thank you. I'm actually thinking from the ground based standpoint, could you take a movie camera and just sort of go around and then dump out individual JPEGs with VLC and then take those overlapping images and do the 3D effect? Yes, you can. You want a good lens, so avoid your cell phone, but otherwise, yeah. And low resolution isn't necessarily a bad thing. So although with a 4K camera, you're not really talking about very low resolution images. Well, I'm thinking HD video. Yeah, HD should work. As long as you have a decent lens. And the other caveat is there may be some work. So for most sort of consumer cameras, there's an existing database of what the size of the CMOS chip is and what the focal distance is and the relevant photographic parameters. So there may be a calibration process or some things you'll need to look up. Usually when I look up stuff, and I don't know, I haven't done it with video cameras, but... I think DigiCam has a library built into it for a lot of that. Oh, does it? Yeah. Okay. With that information. What's that? Lens fun. Cool. So then you could actually add those. Oh, well. Anyway. Search. Wow. Train of thought, and then you can add those parameters to the calibration information in the OpenJob map and then run it. So... Thanks for the talk. I'm curious about the control. Is it internal or is it not part of the program? So you mentioned sending your surveyors out when they come back with some control points. Yeah. Are you doing the matching in a separate thing from OpenDrone or is the control built into... Well, I haven't been built yet. So as a starting place, probably pass it as parameters on the command line. It'll probably be a two-step process then. 
Run through, create the point cloud, the mesh, et cetera, and then you run a separate script to do. Basically pass in. We'll start with just a linear approximation model, 3D transformation, probably just, yeah, just a affine transformation, so rotation, scaling, skewing, and then we can iterate from there. From the user interface side. Obviously that's a user interface, but that's not everyone's favorite thing. I don't know exactly how that'll work. Part of the reason for the... What are the core competencies of this is I hope it'll be a core pipeline and then be something people can build tools around. We will build some tools around it as well, obviously, for our use. And also, in the short term, you can easily georeference the point cloud in Cloud Compare. Cloud Compare is awesome. And it's another one of those, you know, not particularly intuitive, but great tutorials online, sort of packages. And it's not fair to say that about MeshLab or Cloud Compare. They do freaking everything in the world. There's maybe no good user experience that can be created from something that does everything. But... Got another question here. What are you doing for your Do-It-Yourself infrared channel? I haven't done anything yet. Wasn't that a infrared... Yeah, that camera's hard to find now. Yeah, that camera was made in 2001. You can still get it. I remembered it's a DSC 720X in the X because it was really, really fast. You can still get it on eBay. And if you're lucky, you can get it for under a thousand actuations. There's a few of them hanging out there. I have a persistent search there. I have not asked my wife if I can buy one. Yeah. That's why the search is still there. One day, one day, you know, maybe there'll be a little extra money and I can buy one. The public labs, guys, I don't know if you've seen their stuff. I was just curious if you were doing their, you know, film... I'm totally going to buy one of their kits. They just revised their kits in the last couple of weeks and yeah, yeah, we're going to buy their kits. We're going to throw them up on balloons. We now have authorization to fly from the FAA for a couple sites, but there's going to be a lot of stuff we want to fly that the authorization will take too long and we just, you know, it's an opportunity and we want to get out there. So we'll throw up a six-foot balloon, a 9-11 foot balloon, a six-foot balloon and we'll put a payload on it and we'll... Yeah. Actually, with their cameras, we can probably put up like a three, four, three or four-foot balloon. So... Thanks. Yeah, public lab is awesome. That's a great way to get a burden there. Cool. Thanks so much.
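As an aside on the georeferencing step described earlier in this talk (pinning the reconstructed camera track down using its two endpoints), the transform involved is essentially a 2D similarity: a scale, a rotation, and a translation solved from one pair of corresponding points. A minimal sketch, with symbols chosen for illustration rather than taken from OpenDroneMap itself:

```latex
% Model-space endpoints p_1, p_2 (arbitrary structure-from-motion coordinates)
% and their known geographic counterparts q_1, q_2.
% ang(v) denotes the heading of a 2D vector v, i.e. atan2(v_y, v_x).
s = \frac{\lVert q_2 - q_1 \rVert}{\lVert p_2 - p_1 \rVert}
\qquad
\theta = \operatorname{ang}(q_2 - q_1) - \operatorname{ang}(p_2 - p_1)

R(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix}
\qquad
t = q_1 - s\,R(\theta)\,p_1

% Every reconstructed camera position x is then mapped by the
% "quick linear equation" mentioned in the talk:
q(x) = s\,R(\theta)\,x + t
```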
|
Aimed at developers and end-users, this presentation will cover the current state of the art of OpenDroneMap, a toolkit of FOSS computer vision tools that aims to be easy to use for turning unstructured photos into georeferenced data (colorized point clouds, referenced photos, orthophotos, surface models and more), whether the images are sourced from street level photos, building interiors, or from sUASs (drones). Currently no comprehensive FOSS toolkit exists that is both easy to use and easy to install; ODM aims to fill this gap.
|
10.5446/31746 (DOI)
|
All right. Hi, everybody. I am Alan. Stamen is a bunch of map designers. So actually, there were three of us who worked on this presentation. The three of us presented something similar at State of the Map, but I've added a bit of new material at the end. So if any of you were at State of the Map and you were worried that you've seen this, there will be some new material. So huge thanks to Seth Fitzsimmons, who is also here at this conference, and Kate Watkins, who's one of our designers who couldn't make it. They both put a lot of work into this presentation and a lot of the things that I'm going to show off. And just looking at the title, I mean, anytime you have a talk that's got tips and tricks and stuff in the name, it means it gets to be kind of a grab bag. There's not going to be as much of a conceptual arc through this presentation as Baren had, which was excellent, and probably Sergio will have as well. So this will be kind of in between those two really contextualizing talks. We'll get into a lot of the nitty-gritty, but I think you'll get a sense of how we're dealing with a lot of the same concerns that Baren brought up, that we're really trying to make sure that we're using this technical tool with the intent in mind of what each map is trying to accomplish, and really keeping that map reader, the map viewer, in mind, like how they're going to appreciate and use the maps. So we're kind of thinking of this as maps with distinct personalities, which is another way of talking about the idea of focusing on intent. But we'll get into a lot of the weeds of just the real, like, how do you accomplish something in this tool that is CartoCSS. Some of these maps I'm going to show you are from TileMill 1, and some are using TileMill 2, which is now Mapbox Studio. We use both of those in our practice. We love all of those Mapbox tools, TileMill 2 or Mapbox Studio and the whole suite of things. And then in a few cases, like the first one I'm going to show, our designers worked with Pinterest. We actually built it on Mapbox's vectors for Mapbox to deploy. So this map for Pinterest was designed to be hosted on Mapbox, so it was a collaboration between the two of us. And all of this work is stuff we've done within the last year. The Pinterest map was around the end of 2013. And if you're familiar with Pinterest, you know what their brand is, what they're going for with their maps, or what they wanted to go for with their maps, and what they're trying to create their interfaces to look like and feel like. So we really wanted something that felt distinctive, that felt really handcrafted, really warm and friendly and beautiful. Just a few examples of, like, using really large type, using a lot of texture, like almost everything has texture in it. Because we knew that Pinterest maps, at most, were just going to have push pins on them, you can use a lot of texture, you can use a lot of color. When we're designing maps that are going to have more complicated data overlays, we've really learned we just can't do a whole lot of texture, or you have to be very careful with it. And this is all just the OpenStreetMap data that has been processed for the Mapbox vectors. And we used custom fonts, which were really important to get that sense of personality. We're styling them with different amounts of fill in those fonts. So some of the really large text has a mostly empty interior, some of them are more solid, and there's the two colors of text.
So we have four label hierarchies. And I trimmed a lot of the nitty gritty slides that we talked about in state of the map in terms of how you deal with tile boundaries. You can do things like text avoid edges or you can tweak your meta tiles. There's more and more documentation of that coming online. So I would skip that. But there's sometimes even when you can't get your labels to avoid tile boundaries. So you can sometimes, what we ended up doing was just skipping any labels that were too long. And in Germany we found there were a lot of these really long city names. And so you can use regular expressions in CARTOS CSS. And so this one is basically grabbing anything, you know, it's dumping anything that's beyond 12 characters and setting the text name to blank. So there's a lot of cities that have just a blank name and you don't see it because the name of that town is really long. So there's a lot of like really just like things, hacks you have to do to make your map work. This is for an earlier draft of another project we were working on. But I just wanted to show a more exaggerated version of what the kind of things we're doing with text. We're just like really ramping up the letting and the kerning and squeezing those letters together, doing a lot of halos and strokes around the edges. So you can see that happening a little bit on the Pinterest map, but this was a draft of something that is really extreme. Like you can see how in the Central African Republic all those lines, the words themselves are like totally glued together. And the only reason you can read it is those outlines. So that's like putting these negative line spacings and negative character spacings. And more special casing that we did, you'll notice actually on this map that there's some of the state labels are oriented in different directions. So this is something that with when we're designing for the web and we're designing with code, you really don't have a whole lot of control over these things. Or you do, but you can't do this for every label everywhere in the world. The whole point of doing world maps with code is that you end up having to just set rules that will apply everywhere and you can't physically go and test them everywhere. But at low zooms, when we're looking at the whole U.S. or if we're looking at where labels are placed on countries at Zoom 3 or 4, why not go ahead and like customize the placement? Do what like a handmade cartographer might do and say this label needs to be moved a little bit. So for California, if the feature is named California, we apply a special rule to it. This is the power of CSS and you can actually just manually go in and do these things. So you rotate California 44 degrees and we just trial and error until it fits. Same thing with Kansas. We had to actually rotate it and offset it so that our United States label would work at the Zoom level. So it takes a bit of extra time, but that's what old school handmade cartographers would have to do. And you can do this as long as you're not, you know, your ambition is not to do it for everywhere in the world. One thing that was also kind of fun we did for the Pinterest map to make it look a bit handmade, once you're rotating things and you're nudging those labels a little bit, you can put a bit of a nudge on everything using some other value that is coming in your data. In this case, all these features have an OSM ID in the map box vector data and we just use that as a bit of a random number generator. 
And based on the OSM ID and we do like a mod three and just to get a little tiny angle and sometimes they're rotated a little left and sometimes they're rotated a little bit to the right. And you can see that if you were to drop these lines across. You can see Switzerland is tilted one way and Geneva is tilted a little bit another way. And this is the textures in here are really something that Kate spent a lot of time on. And the colors, she did a lot of work basically choosing a few limited colors and then using the Cartos CSS like and darken operations so that basically you kind of know, I mean you could calculate those hex values yourself, but you could say I'm just going to use this one hex value, use those built in tools to like lighten it a little bit, lighten it a little bit for this feature, like a lot more for another feature and you kind of know that these colors are going to really work together but add a lot of depth. And using the operations to like multiply colors and multiply textures. So you can apply a background texture and then you fill in your park feature with a color that is going to use this Photoshop style multiply effect. So the textures keep coming through. You're not overlapping any of the texture, but you end up with a green pattern texture and a slightly blue pattern texture. And there's really subtle things like nice drop shadows along the water. And these are some tricks that we learned from actually the Mapbox crew because they, the way the Mapbox vectors come in, you don't always have a land feature to grab onto. You have a water feature, but if you want to have a stroke on the outside of land, you have to do a few tricks like in this case, you do a dark fill on the water first and then you apply another one that's a light fill, but you put a blur on that. And the result is the edges of that light fill get blurred away and you end up seeing the dark in the background only along the edges. So you really see a subtle dark shadow on the land, but that's actually being applied to the water. So another project that we were working on that was less personal, less idiosyncratic was a map for the Golden Gate Parks Conservancy. They managed all of the national parks in the Bay Area and they wanted a nice slipping map that really reflected that national park kind of style and feel. So it feels like those great maps you would get from a park ranger when you actually drive up to the park gates. But it also works as a slipping map. And we had to integrate it with Google driving directions and then these hiking directions from another later. So we had to pull a lot of things together on top of this map. And it's all open sourced. So we have three repos that we put into the Parks Conservancy's own GitHub group. So you can look at those three repos and see the different styles we apply. So we worked on the background, the features and the labels separately. And you can see a little bit of subtle things going on in the labels. You can see some kind of alignments that you might have not noticed in other kind of Cardo CSS based maps. The campgrounds and the labels on the campgrounds are all left aligned, which was actually kind of hard to get Cardo CSS to align things left. So what we ended up having to do was create transparent icons that had a bunch of white space or blank space to the left so that the text would align left but it would align to the center of the icon. So then the result here is that it looks like the label aligns left to the left edge of the icon. 
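To ground the label tricks described in the last couple of passages, here is a hedged CartoCSS-style sketch of the kinds of rules involved. The layer names, field names, thresholds, and angles are illustrative guesses, and expression support for things like computed rotations varies between TileMill/Mapnik versions, so read this as the shape of the approach rather than the actual Pinterest stylesheet.

```mss
/* Drop city names that are too long to fit: a regex filter catches
   anything with 13 or more characters and blanks the label. */
#city_labels[name =~ '^.{13,}$'] {
  text-name: "''";
}

/* Hand-tuned special cases at low zooms, old-school-cartographer style. */
#state_labels[name = 'California'][zoom <= 5] {
  text-orientation: 44;            /* found by trial and error */
}
#state_labels[name = 'Kansas'][zoom <= 5] {
  text-orientation: 8;             /* hypothetical angle */
  text-dy: -6;                     /* nudged so the US label fits */
}

/* A tiny pseudo-random tilt per label, derived from the OSM id,
   so placement feels hand-made (exact expression syntax may vary). */
#place_labels {
  text-orientation: "([osm_id] % 3 - 1) * 2";
}

/* Land "drop shadow" faked from the water layer: a dark fill underneath,
   then a blurred light fill on top so only the dark edge shows through. */
#water::shadow { polygon-fill: #3b4251; }
#water::blur {
  polygon-fill: #e9eef1;
  image-filters: agg-stack-blur(8, 8);
}
```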
So if you're making your own icons, do whatever you need to do to make this stuff work. If you look closely, all of those little overlook icons are actually looking in the right direction. It's not just an icon showing where you can have a nice view; you can see which direction you ought to be looking when you're there. We couldn't figure out how to rotate those icons correctly at render time, and with the odd alignment trick we were doing it wasn't going to work anyway. So instead, and this slide is showing it, these are all shield symbolizers, which are mainly used for things like putting a number on an interstate highway shield. But shield-unlock-image means you can put your text anywhere; it doesn't have to go on top of the shield. So we're effectively labeling the icon with a shield symbolizer. As for rotating them, we created a set of pre-rotated images: a little Illustrator work and a little scripting to rotate them all in 15-degree increments. And since there was a small enough set of overlooks, and our clients at the Parks Conservancy were willing to sit down with a spreadsheet of 30 overlooks and tell us which direction each one points, they added an orientation field and we apply a specific marker file for each value. Again, this is the kind of customizing you can do when the dataset is small enough. Now for more recent work; this is still in progress, so it won't be quite as tips-and-tricks. One of our most popular map styles is Toner, which is designed purely for putting data on top of. The black and white looks beautiful on its own, but it's also really great under colored overlays, polygons and so on. We made it a long time ago with a tool developed at Stamen called Cascadenik, which was a precursor of CartoCSS, and for a long time we just didn't have the resources to port it to CartoCSS. Only recently did we find the time, thanks to a Knight grant, and it was actually the Knight Foundation who originally paid for us to produce these maps. Toner is free for unlimited use as a base map; the tiles are out there. Now that we've finally ported it to CartoCSS we can start to update the database, so it won't be three-year-old data: it is now only two months old, and soon we'll refresh it faster. As I mentioned, Brandon Liu, who's been working with us, has also been amazingly helpful porting our rendering infrastructure off of one server in a basement somewhere and onto the cloud; follow up with him, or with Seth who's here, about how we're doing that on the back end. The changes we're making right now are very subtle; we're really just trying to copy Cascadenik over to CartoCSS. One thing you might also notice is that we've enabled retina rendering. We're still figuring out the kinks on that, like how it changes how we apply styles, and whether we'll have to special-case a lot of rules so labels are placed differently for retina. Some strange things are showing up that are kind of interesting.
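The shield trick described above looks roughly like this in CartoCSS (the icon paths, layer name, and orientation values are hypothetical):

```
/* Label the icon itself with a shield symbolizer, and unlock the text from the image
   so the label can be offset to the side. */
#overlooks {
  shield-name: "[name]";
  shield-face-name: "Open Sans Regular";
  shield-size: 11;
  shield-file: url('icons/overlook-0.png');
  shield-unlock-image: true;     /* text no longer has to sit on top of the shield */
  shield-text-dx: 14;            /* push the label off to the right of the icon */

  /* Pre-rotated icon variants chosen by an orientation field in the data. */
  [orientation = 45]  { shield-file: url('icons/overlook-45.png'); }
  [orientation = 90]  { shield-file: url('icons/overlook-90.png'); }
  [orientation = 135] { shield-file: url('icons/overlook-135.png'); }
}
```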
But one of the things we finally got to do, one of the things that had been bugging me: there are misspellings in Natural Earth, and I don't know if anyone else has run into these. Now that the maintainer of Natural Earth is at Apple, we may not see any improvements, so if anyone wants to get together and start working on a fork of Natural Earth, that would be great. But special-casing works here too: if there's a misspelling, just find the feature that's misspelled and change the name. In this case, Strait of Georgia was spelled "straight" as in a straight line, not "strait" as in the water body, and there are a few of those that bugged me. So whenever the renderer finds a feature whose name is the misspelled Strait of Georgia, we just change the name. Another little quirk of CartoCSS is that you can't seem to just assign a new literal text name; you have to add it to something. So you need a field that is empty. In this case we use name_alt, which just happens to be empty for these features, and we add the string we want to that blank string, because the label has to be derived somehow from a field. We also tried doing Toner based on the Mapbox vectors, and there were just too many little tweaks we wanted that weren't available from them. The vectors they provide are great, really efficiently designed for most kinds of cartography, but if you want to really tweak things and get into the database that's driving your rendering, you can't rely on them. One thing we wanted to do was filter buildings by size. In the Mapbox vectors, buildings come in without the area of the building, and there's no way to calculate it at render time, so you basically have to draw all buildings or no buildings. What we do for Toner is that at zoom 14 you start to see only buildings with a large footprint. You actually get interesting effects this way: you end up seeing more or less where the industrial areas and the central business district are, because the big gray crosshatched buildings show up downtown, and the buildings that fill a whole city block show up around the convention center where we are now. In OSM there are actually buildings mapped almost all throughout Portland; they're just not showing up because they would clutter the map at this zoom. The other thing we realized is that OSM has features tagged building=no, so we had to filter those out and only draw buildings that aren't building=no. You can see what we do: at zoom 14, if the area of the building is greater than or equal to 5000, we draw it; at zoom 15 we lower that threshold; and by zoom 16 we show every building. One other thing I want to talk about with Toner is that we frequently found with our visualizations that when we put a colorful GeoJSON polygon on top of the map, it sometimes looks bad covering up the labels or the road network. So Toner actually comes in a lot of different flavors, all available to you as well. There's toner-background, which has all the labels removed, and there's also a toner-labels layer that is entirely transparent except for the labels.
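Sketched in CartoCSS, with layer names and exact thresholds treated as illustrative (the real Toner stylesheets live on Stamen's GitHub):

```
/* Fix a Natural Earth misspelling at render time. CartoCSS won't take a bare literal
   as a new text-name, so we concatenate onto an (empty) field instead. */
#marine-labels {
  text-name: "[name]";
  text-face-name: "Open Sans Regular";

  [name = 'Straight of Georgia'] {
    text-name: "[name_alt] + 'Strait of Georgia'";
  }
}

/* Only draw big buildings at mid zooms; 'area' here is assumed to be a column we
   precompute in our own database, since it isn't available from the vector source. */
#buildings[building != 'no'] {
  [zoom = 14][area >= 5000] { polygon-fill: #ccc; }
  [zoom = 15][area >= 2000] { polygon-fill: #ccc; }
  [zoom >= 16]              { polygon-fill: #ccc; }
}
```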
To generate that, we use the exact same stylesheet, plus one extra stylesheet that sets the map background to transparent. Then in our project file we just deactivate all of the layers we don't want. So we don't really have to write multiple stylesheets; we just put that override on and deactivate a lot of those layers when we render. In the same way you can get toner-lines, which is just the roads; we also have a buildings layer, and the base layer to put all of these things on top of. Then you just need a little extra code in Leaflet or whatever you're using: draw the base layer, then your GeoJSON polygons, then the labels and roads on top. Your nice labels float above the data instead of being colored orange and purple underneath it. There's also one more: even though the black-and-white Toner style is great for data visualization, a really pale base map is often even better. So we have toner-light. In that case we apply a new stylesheet that just overrides the variables that define the colors. All the other style elements are the same, all the widths are the same, we don't have to change those; we just say, instead of coloring the water black, take #000 and lighten it 85%. You can keep controlling those values until the light style looks how you want it to look. All right, the last one I want to talk about is something we just launched this week, which we're really excited about, and it shows that TileMill is becoming a tool we can use for all kinds of maps that aren't necessarily even tile-based. Audubon came to us; they have a GIS department, and they've built statistical models showing where bird species are likely to be found in 2080 under various emissions scenarios, because as the climate changes, the area a bird will want to inhabit changes. So we made them a set of maps like this. TileMill 1 has some really great raster support; you have to use one of the development versions, I think the stable release doesn't have it, but it's been in the dev builds for months and months and there are a lot of good blog posts explaining how to work with rasters. What we ended up doing, again designing for how to make this accessible to an end user: Audubon's goal was to combine multiple datasets into a single map that would be shareable across social media, so you don't have to send people to a web page with separate maps of where the bird is now, where it will be in winter, where it will be in summer, and so on. They wanted it all in one map. So the yellow and blue outlines show where this particular bird's range is in the year 2000, the current state. The wispy rasters, which their GIS people gave us (we didn't create those), are composited so that the blue colors are the probability of where that bird will be during the winter season in 2080, and the yellow colors are where it will be in the summer; where the bird is there all year round, they blend into green. The next slide is a little glimpse of how the raster styling works in CartoCSS.
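The override stylesheets are tiny. Something along these lines, with variable names that are illustrative rather than the actual Toner source:

```
/* Extra stylesheet appended for the labels-only or lines-only flavors:
   make the map canvas transparent, then deactivate unwanted layers in the project file. */
Map {
  background-color: rgba(255, 255, 255, 0);
}

/* toner-light: reuse every rule and width, override only the color variables. */
@water: lighten(#000, 85%);
@land:  lighten(#000, 95%);
@road:  lighten(#000, 60%);
```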
You apply these color stops. Down here, stop zero uses the variable winter-zero, defined at the top, which is an RGBA color whose fourth value, the alpha, is zero, meaning that where the raster value is zero, where the bird is not going to be found at that spot, it's fully transparent. When the value reaches one, we use the variable winter-high, which is the same color at full alpha, fully opaque, and the values in between blend. And you can blend the winter and summer colors together because we're using the multiply raster composite operation. The other thing you might notice about this, if you're aware that it's coming out of TileMill, is that it's a different map projection; this is not what you'd get just loading these datasets into TileMill normally. This goes a bit beyond CartoCSS into other advanced tips for using TileMill: look inside your project.mml file. There's a whole bunch of lines you can ignore, but one of them is called "srs", which is the spatial reference for the map. It's a big, long string, and it's probably best if you don't try to fully understand it, but it is a standard thing: these proj4 strings are kind of a lingua franca for how you define a map projection. Pick any other projection you want, go online and find out what its proj string is, change that line, and TileMill will start reprojecting for you. I haven't found a good UI for doing this inside TileMill, but it's not that hard to just open the file and edit it. You don't really need to understand all of it; basically +proj=aea tells it it's an Albers equal-area projection, and then there are parameters for the standard parallels and the center point, which you can tweak if you want. I've taken an Albers and changed the center point and moved it around. Once you get into understanding those proj4 strings a little, and you don't have to understand them entirely, you can do all kinds of things. The other thing I wanted to mention is that once you're editing that project file, using whatever scripting language you're comfortable with, you can start driving TileMill from the command line: call node on wherever your TileMill is installed, export the name of your project, and tell it to export as a PNG with a given width and so on. Because Audubon gave me not only four time frames for each bird but also about 300 different bird species, I wrote scripts that swap out the data files in my project file so it points at the correct raster and spits out a huge folder of PNGs. These then become animated GIFs; if you go to their website you can pick any individual bird, and we generated 314 animated GIFs. Stitching the frames together and putting the legend on, I did with ImageMagick on the command line. So once you start scripting it, I think I have to click forward 16 times to start them all moving, but I wasn't counting; anyway, they should all start moving at some point. You can start to use TileMill as a tool to do all kinds of different things. I have a bunch of leftover slides at the end, but I think I'll skip those. Thank you. Any questions? The microphones are over there.
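The raster rules he's describing look roughly like this in CartoCSS (this assumes a TileMill dev build with Mapnik raster-colorizer support; the variable names and colors are illustrative):

```
@winter-zero: rgba(46, 107, 171, 0);    /* alpha 0: invisible where probability is 0 */
@winter-high: rgba(46, 107, 171, 1);    /* same hue, fully opaque at probability 1 */

#winter-range {
  raster-opacity: 1;
  raster-comp-op: multiply;             /* lets the winter and summer layers blend to green */
  raster-colorizer-default-mode: linear;
  raster-colorizer-default-color: rgba(0, 0, 0, 0);
  raster-colorizer-stops:
    stop(0, @winter-zero)
    stop(1, @winter-high);
}
```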
Do you feel like there's anything missing from CartoCSS right now, other functionality that would be nice to have? Sure, there's a lot that would be nice to have. A lot of these examples were us running into the limits of what can be done and wishing for a bit more flexibility; we'd like almost all of the values to be definable programmatically, and not all of them are. But my understanding is that a lot of that comes down to limitations in how CartoCSS compiles into Mapnik's stylesheets, so it would require improvements to Mapnik, and that's something I don't know how to do at all. It's great what we can already do, and I have nothing specific to ask for. Thanks. Next question: is it possible to define complex layouts inside labels, for example representing a label's text horizontally on one side and vertically on the other? I'm not sure about that particular combination, but there are some pretty powerful layout options. You can give it a list of orientations that you prefer: for example, if you're labeling cities and you want them all positioned to the northeast, as is standard cartographic practice, you can say try that first, and Mapnik will do that for the first cities it encounters; then, when it tries to label a city that would overlap another label, it will drop to your second choice and your third choice, and so on. That's the kind of control we tend to use a lot. But to apply a genuinely different orientation, you would more likely add a field to your incoming data, look for labels tagged horizontal, and apply a different rule to them however you liked. Okay, thank you. This is a really basic question, but I know how to use CartoCSS in TileMill and CartoDB; how do you implement it in a standard Leaflet app? Well, you don't really implement CartoCSS in a standard Leaflet map, because CartoCSS is really just the rules for creating tiles. CartoDB creates tiles for you on the fly, TileMill creates them in one batch that you upload, and Leaflet can only show tiles that you've already made somewhere else. Thanks. Can we get Sergio set up to start? Also another basic question: the animated GIFs you were showing, were those just composites of different PNG files that you exported from TileMill, or how did you make the animation? Yeah, just using the scripting of TileMill I generated the four PNGs and converted them using ImageMagick into a GIF. TileMill didn't know anything about the animation part; it just knew it was creating a bunch of static images. Thank you. One thing I noticed is that Stamen has a very good habit of putting the things you make somewhere on GitHub so people can look at them; I noticed you had that for the national park maps. In general, how do you handle that, because your client also has to like the idea? Exactly. We try to talk to the client upfront, but we often forget to ask whether or not they're going to be okay with us open-sourcing it later. So we do have a lot of projects that are not open sourced that we would like to be.
With the Parks Conservancy in particular, they were really interested in making sure it would be open sourced, because in their mind they're one of the few well-funded parks organizations, and they wanted the resources they were putting into this to be reusable by other parks. So in some cases putting it online is a selling point and part of the plan from the beginning. In other cases it's more like: oh yeah, this is really cool, we should have asked them if they were okay with it; let's ask them now. Yeah. Thank you.
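One concrete footnote to the Q&A above: the list-of-preferred-orientations behavior he mentions corresponds to CartoCSS's simple placement lists, roughly like this (layer name illustrative):

```
#city-labels {
  text-name: "[name]";
  text-face-name: "Open Sans Regular";
  text-placement-type: simple;
  /* Try northeast first, then fall back clockwise when a label would collide. */
  text-placements: "NE,E,SE,S,SW,W,NW,N";
}
```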
|
CartoCSS is becoming an ever more popular, and ever more powerful, tool for cartographic and data styling. In this talk, Stamen designers and technologists will present some tips and tricks to make your next design sing. Tips and tricks covered include, but will not be limited to: pixelation, use of dingbat fonts for texture and markers, post-facto label adjustment, alternate uses for text symbolization, where to find and use entropy, blending, and geometry manipulation.
|
10.5446/31747 (DOI)
|
Okay, so good morning everybody. I'm Sergio Alvarez, co-founder and head of product at CartoDB. I've prepared a more philosophical talk, as a bit of an intruder in this community (there are not so many designers here), to give you a different point of view on how we believe, at the company, open source will evolve in the future. First, a bit of background. I studied computer science and worked as a designer back when being a developer was kind of cool: we were discovering informatics, discovering hackers and all that stuff that never really existed but was cool enough to be in films and books. After university you put on a suit and tie and become a "real" developer, which is not that cool anymore. The most important thing is that I studied computer science because I didn't believe there was another option for me; design studies weren't official, and it was really hard to find good places that taught design. I realize now that it was a really great decision, because I work in a team full of engineers, and knowing how things work below the design layer is very important when you start designing a service, a product, or a website. There's a quote from a university professor: the culture of the 19th century was defined by the novel, the culture of the 20th century by cinema, and the culture of the 21st century will be defined by the interface. That's my favorite quote ever, because it says that all of us in this room right now will be defining the future, and that's a great opportunity. It's exciting to be working on user interfaces, including interfaces that aren't on a screen at all. Square, for payments, is changing the way we interact with money: a piece of hardware that is a different interface for doing things. Twitter has changed how people communicate and how media publishes content; it's an interface too, a different way to interact with technology, content, and people. Or Google Glass, one of the strangest use cases: a new device you put on your head, and you look a little bit dumb with it, but it's changing the way we interact with information. So when we start building something, when I start to think about design, the main goal is to improve people's lives: to make people better off by using your product, your interface, your technology. Every single thing you think about building should start from that. Our CEO, Javier, who isn't here, came up with a quote when we were launching a developers program: the future of geo is not one application with a million buttons, but a million apps with just one button. It reflects how we believe the geo industry is going to change. It's going to be democratized, a lot of people are going to have access to all these technologies, so it's important that we build interfaces adapted to those particular cases. And this is very important, because then you need to start approaching open source as a different thing.
We'll go back to this later, but you need to start building products, not only technology or libraries or software. Back in university I had the feeling we were using open source because it was free; we were students and we didn't have any money. But I really think we use open source because we believe it is better: open source software is going to change the world, and by all contributing to the same piece of software we are going to create a better piece of software. That changes the way we perceive it. This is another of my favorite quotes: people ignore design that ignores people. If we are competing with commercial solutions, we will have to start behaving like they do, not in terms of selling or hiring a lot of people or whatever, but we will have to start designing products; we will have to start worrying about how to design things. Here's a real use case: an Android application that lets you do precision agriculture. You buy it for a thousand bucks and the farmer receives a kit at home with a Bluetooth receiver, a GPS antenna, and a tablet with the application installed. The developer built this and started shipping it to people, and farmers would receive the kit, look at all the pieces together, and call support: "Hey, this is not working." Which is kind of what you expect when a farmer is trying to interact with technology. "Have you connected it correctly?" "Yes." "Have you switched it on?" "Yes." And so on. The guy in charge of this product couldn't go there and help the farmer because it was so far away, so he implemented a set of sounds that reflect how far the software has gotten in its setup. If you switch it on, it plays a sound; if you plug in the GPS antenna, it plays another sound; if you plug in the Bluetooth receiver, it plays a different sound. Then he asked people to call him while they were sitting next to the device, so he could hear how far along they were. He saved a lot of money by not having to go there and look at what was going wrong with the software. And the best thing is that this guy is not a designer; he's a developer, our lead developer at CartoDB, and he designed this as a side project. Which shows that, at the end of the day, designers and developers are not so different; we are really similar people, aiming to solve a problem, aiming to find solutions to particular problems. So what did we do with CartoDB? When we started thinking about CartoDB, we really wanted to do things differently. We weren't sitting at a table
drawing prototypes; rather, we thought about a set of things we wanted to achieve with the software, and that is what made us create a design-driven company, because our objectives were really focused on people. The first is that we wanted to improve decision-making by creating a good tool for analyzing and visualizing data on maps. One of the main objectives was to have people creating and telling compelling stories with important data: not only people with geo skills, but journalists, scientists, designers, storytellers, whoever. We needed to design it in a way that anyone could use CartoDB the same way they use Excel or any other piece of software. The first thing we try to do is remove the barriers between people and data by humanizing the relationship you have with the data. Nobody understands a spreadsheet with a million rows; honestly, I don't think it even works in Excel. We wanted a user interface that humanizes it, that makes it more natural to interact with the data. Second, we wanted to turn datasets into APIs. We really believe that opening data is going to change the world and improve a lot the way we work with data, but we want to do it in the proper way: we don't want people to put data on the internet as CSV or PDF files, we want them to create API endpoints, so you can query them at any moment to build applications on top of them. And we also wanted to allow people to focus on problem solving: we want developers to build applications on top of the platform so they don't have to worry about maintaining a PostGIS installation, or servers, or distributing tiles, or whatever; they can just focus on solving the problem they are experts in. There are five design principles that are really spread across the company, that every single person in the company is aware of. They are pretty simple, easy stuff, but I wanted to share them in case they help you. First of all, design for us starts on the blackboard, or even in a wiki. One of the things I really hate about designers is that we try to overcomplicate the things we do; we try to defend ourselves by saying we are doing some crazy stuff, researching a lot of things, and that's not the reality. Sometimes we do, but most of the time we just try different options, take the one we like most, and implement it. So why not start in a friendlier environment for people who are not skilled designers, like a wiki page on GitHub or a blackboard where we all draw our ideas? This, for example, is a wiki page on one of our repos; this is how everything starts at CartoDB, just text defining some kind of requirements, and we start discussing it. This one was created by my partner Javier, who is not a designer. Or he is, but he doesn't know it.
I think he doesn't want to recognize it, but yeah. The second principle is to improve the workflow without adding complexity. For us this is the most important thing in UI design: we really want to remove friction between data and people. I'm very proud of this feature because it was developed by me; I have to say, as a disclaimer, it's the only feature the team has allowed me to commit to production. It's drag and drop. I'm going to leave it up for a while because I want you guys to love it as I do. You just take your file, drag and drop it onto your dashboard, and it gets imported. Awesome, right? Say yes. Okay. This is kind of trivial, because in terms of implementation it's really easy, but it changes the way people interact with the platform. I remember we were with a client in a very fancy house at the last FOSS4G in Denver, and he said, "You know what would be amazing? If I could drag and drop a file and..." Wait, wait, this is already working; it's already implemented. He was like, oh, perfect. So it's maybe the smallest feature implemented in CartoDB, but it's one of the ones people like the most. Another thing about this principle is that we need to implement things on top of technology that allows us to scale to infinity. Everything in CartoDB is based on SQL and CartoCSS. If you don't know how to write CartoCSS, you can always use the wizards and generate pre-built styles by clicking some buttons; but if you want to go further, you can actually improve it by coding in CartoCSS, or by implementing some of the awesome tricks that the Stamen crew covered in the previous talk. For example, we are now implementing vector rendering in CartoDB, and one of the things we could add with vector rendering is mouse events: what happens when you put the mouse on a point? We all want the point to grow and shine and do all those kinds of effects. So we are implementing that as a CartoDB extension: you can use it from the CartoDB editor by writing CartoCSS, but you can also just click a button and activate it. And who is not really bored of working with info windows? It's kind of the worst thing to code on a map, right? It's really complex, so we wanted to simplify it by adding some parameters, ordering and so on, drag and drop again, making it really easy but also very scalable, because you can always customize the HTML that is powering the info window. If you are a non-technical user you just click a few buttons, but if you want you can go further by working with the HTML. For us this is key: we don't implement anything that is not based on those technologies, because we are afraid of not scaling as the product has to scale. The third principle is that design is not about the singular pieces; it is about the total space. When you are designing or building a piece of software, you have to think of it as a bigger thing. If you are thinking of implementing a feature, the first thing you have to ask is how this feature is going to
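To make the "wizard first, CartoCSS when you need it" idea concrete, here is a hedged sketch of the kind of stylesheet a wizard might produce and a user could then hand-edit; the table and column names are made up, not a real CartoDB account:

```
/* Styling a hypothetical points table: a wizard generates something like the base rule,
   and you can keep editing it as plain CartoCSS. */
#stores {
  marker-fill: #d54a35;
  marker-width: 8;
  marker-line-color: #fff;
  marker-line-width: 1;
  marker-allow-overlap: true;

  /* Hand-added tweak on top of the wizard output. */
  [revenue > 1000000] { marker-width: 16; }
}
```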
impact the product. You have to think not only about the user interface; you also have to think about the code, because every single line of code that you generate with a new feature is a line of code you will have to maintain. It's important to remember that sometimes design is about deleting. One of the most important exercises we do at CartoDB is removing stuff, so that we have fewer bugs and we are all happier. Adding more features is not necessarily designing more or designing better; sometimes you really want to focus on one particular thing and do it really well. This next one is my favorite because it improves my life a lot: if you see something that you like, just copy it. I mean it, do it, it's really good. For example, we copied this from Slack, the software we use, which is like HipChat but cooler and more expensive, of course. It has a hidden feature that is really good: you type a hexadecimal color code and it renders a small box in that color. We said, this is cool, we should implement it in our CartoCSS editor. So now when you write a color, it gets surrounded by that color, and you can click on it to open a visual editor for that color. We designers tend to think that people know which hex code is a red, but no, trust me, no. If you don't do that, people will end up coding a color like #FF0000, which is a really horrible one, or #00FF00, which is even worse. So this has helped us a lot. And the best one: you need people who want to work at a high level but who can go all the way down to the atoms. This is very, very important. We try to hire people who, even if they are engineers or salespeople or whatever, focus on the small details, because design is detail. We want people to love and implement things with a high level of quality, but we also need them to think about the complete view of the thing. So my talk was about how to create a design-driven company without needing to hire designers. What I wanted you to take away is that you don't need to hire a designer in order to have software that is designed well; you just need to change your mindset a little bit. Just to conclude: if we want design to be part of open source, we need to define it correctly. I don't want you to define design as colors, typography, forms, functions, whatever; design is much more than that. Design is how you think about your product, how you communicate it, how you explain it, and also how people will interact with it. I just hope that at the next FOSS4G we have more designers sharing their experiences with open source software, because this is key: if not, I think open source will not be visible enough, will not have the critical mass that we all need to be successful with our products. And I think that's all, thank you very much. Yeah, questions? I have a couple of comments and a question. First of all, I'm a cartographer myself; I've used CartoDB a little in the past, but when a friend who has no mapping experience had to make a map,
I didn't think of CartoDB right away. I thought of, you know, Google Fusion Tables and GeoCommons, and we tried a few different things and ended up on CartoDB, because it's really the only thing that does what you do, and you do it well. I really like the principle of laying out a very easy entry point and then being able to dig down and, for example, use CartoCSS. It's very well done, and I commend you for that. But having said that, I take issue a little bit; I get my hackles up any time someone says you don't have to think about projections. You are in a position to be educating people about good design principles, in particular cartographic principles. Projections are a huge problem in open source, and really in any modern web mapping, open or closed source. For example, making a choropleth map on Web Mercator has some huge theoretical problems, some really misleading concepts that you're communicating to people. So have you thought about the fact that what you're building is a platform for education? I'm not talking about teaching people the equations to create a projection, but teaching appropriate map design. Have you thought about it from that angle, that frame? Yes, we have. We started with Web Mercator and we removed all the complexity behind projections because we really want people to create maps in a really quick way, so we can engage them while they are creating a lot of maps, and then we can help them improve those maps. Projections are something we are aware of, and I totally agree with your comment; it's something we will take care of. In the mid term, I would say we will start working with different projections. We also want to map data at the poles, for example, and we know that with Web Mercator you can't. And we also want to provide more insight about which projection would best fit your particular interest, your particular objective. So yeah, we will work on that. More questions? I put that projection thing in there to get more questions coming; it didn't work. Okay, so thank you very much.
|
Open source geospatial is in an Enlightenment era regarding design; many teams are breaking away from tradition and embracing simple, clean, and usable interfaces. For a long time though, open source geospatial software, and geospatial software in general, seemed to pay little attention to the knowledge of the design community. Here, I will discuss why design has taken a backseat for such a long history and what is suddenly changing that brings it to the forefront. I will also talk some about the design decisions that have gone into the CartoDB user interface and many of the mapping options we help our users find. This talk will focus both on the history of design in open source geospatial software and where it is heading in the future. We will also talk about how design itself is inherently open and how we are working to improve open source software design through our own contributions and through this discussion of our process.
|
10.5446/31748 (DOI)
|
All right: Boise. Baltimore. Northeast California. Seattle. Idaho. Madison. Switzerland. San Francisco. Portland. Anyway, the Intertwine Alliance is a member of the Metropolitan Greenspaces Alliance; there are seven of these regional coalitions in the country. It was formed in 2009, really to ensure that the region's network of parks, trails, and natural areas is completed and cared for, and to help the region's residents connect with nature and live active, healthy lives. I work for Metro, here in Oregon, across the street; there are a number of organizations that are members of this alliance, so I'm just representing them today. And I work with GreenInfo Network, where we're essentially a nonprofit that works with other nonprofits providing GIS services: print maps for parks and recreation, interactive maps for social justice and conservation, that sort of thing. This definitely fits that third category, interactive maps for conservation. We're going to discuss very briefly what the RCS itself is; it's a localized thing, so you might not have heard of one. Then the viewer software itself, which provides information for enabling decisions; then a little bit on the technology, depending on how interested we are in the technology as opposed to the actual real-world and human uses for it; and then some of the real-world use cases and the value people have gotten out of it. I thought we were going to focus more on the tech side. The Regional Conservation Strategy is an initiative of the Intertwine Alliance focusing on biodiversity conservation. So why did we take this on? There have been a number of individuals and organizations in this region that have been working on this for decades, knowing that the importance of conservation goes beyond political boundaries, and there have been a lot of efforts. The problem is that there has been no effort covering the whole region: we've had statewide efforts, great citywide prioritization efforts, ecoregional efforts by The Nature Conservancy, but again, nothing comprising the whole area. That was a challenge when we went looking for funding and tried to share our vision. So some of the main reasons we did this were to create that shared vision for biodiversity conservation, to increase the impact of our work, to connect jurisdictions and existing planning efforts, and to increase funding opportunities. Which one's the arrow? The spacebar, over there. All right. The strategy is really comprised of three main elements. The Regional Conservation Strategy itself is a 161-page document with chapters on biodiversity corridors, climate change, existing conditions, and more; it's a great resource that we have available for everybody. The Biodiversity Guide is a 360-page document, the science behind the RCS; both are available online. This was no small effort: a three-year project, 161 volunteer writers, 75 different organizations. Then we have the habitat modeling and the data output. We also have an executive summary, which any of you can take; I've got a few here. And we have the RCS Viewer. And we're stuck. Okay, there we go.
So what is this region we're working in? We're not up there; did it just blank out? Okay, that's odd. I take the blame for that. All right, we'll see if we can move on from here. Use the arrows. All right. Here's our region. It's a big region: parts of ten counties, two states, and I think about 33 sub-watersheds. Our first key decision in the prioritization effort was creating a better habitat model. Most of our previous prioritizations weren't able to depict high-priority areas in an urban setting, and you see this all over the country: if you don't have good land cover data, it's just really hard to model prioritization or high-value habitat. These were the results of our initial habitat model, and as you can see, the urban areas here were not highlighted; most of the high values are out toward the farther reaches, up toward the headlands. So we took a different tack and clipped the raster output, and what we found is that we could then re-ramp the scores, which allowed us to bring out the quality of habitat within urban areas. This took a lot of GIS manipulation and was somewhat time-consuming; the datasets were huge. Another key decision was not to draw polygons for priority habitats, which is a fairly unique choice. The reason is that because you're dealing with five-meter pixels of results, we would get clumps of very high-value habitat, little circles all over the region, so we decided to provide the raw data as the output. That was great for a GIS specialist who could do some of the work and look at their own areas. Sorry about this, guys; I think we're on the wrong presentation, so let me try to summarize. With this output (I had a few other slides in here), it gave you the opportunity to see that if you looked at a very urban area, where regionally we weren't seeing any high-value habitat, there was still something to find. I'll let Greg explain. Well, really the upshot of it is that within the urban areas you still have lower and higher value habitat. The city of Beaverton, for example, looks like it scores very poorly on a regional scale. However, if you zoom in on Beaverton and use that as your local focus, there are higher and lower value areas within Beaverton, and sometimes that's what you're really concerned about. Maybe the land in Beaverton doesn't have the highest value compared to the headlands up there, but there are still areas and opportunities within Beaverton where your environmental impact and your habitat value have their own variation. So the question was: how can we present the tools, the visualizations, and the ability to do these GIS operations without using ArcGIS, without advanced GIS knowledge, and without having a GISP on hand? So then I'm going to let Greg talk about the viewer. The big challenge for us was really to empower the local user to define an area.
Then, for that area they defined, we would re-ramp the values of the model, and the output you see is the high-value habitat relative to the area they selected. It was really cool while we were working with this in GIS, but we needed a tool that would get it to the decision makers, to the policy makers who don't have GIS experience. That was the challenge we brought to GreenInfo, and we're super happy with the product they came up with. I'll let him show you some of that, and then we'll show some examples. Sorry about the earlier glitch; we kind of lost it there. So at its core, it's a browser-based interface over the RCS habitat valuation model, park lands, and other such datasets. The goal was that you don't need any specialized GIS knowledge or even software. In fact, even if you don't have the ability to make a shapefile, if you can just hand-draw on the map, it will still be able to do the calculations and get your habitat valuations, statistics, and so forth. You can define an area in a number of ways: you can select prefab areas such as city boundaries and counties from a menu, you can upload a shapefile, or you can hand-draw. What you end up with is something like this. This is habitat value shown on the full regional spectrum, that is to say, within the entire universe of the Intertwine area. As you see, most of this is comparatively low value; however, over near the grassy areas it ends up somewhat higher, as one would expect. But what about local focus? Within local focus, you see a whole lot more area indicated as high value for this area. I'll switch back and forth here, so you can see regional perspective and local focus; it's one of the taglines. Likewise, if we want to show only the highest-value habitat, ranked in the top ten percentile overall, there is some significant area here, significant opportunities and focus. And yet, if we look at a local focus, we discover even more opportunities for doing good locally. Having selected an area, you get back a fairly rich set of statistics, most notably land cover: forest land, riparian land, places that are already developed, places that are already conserved, habitat valuation and so forth, in a nice, rich set of pie charts. They find this immensely useful; most of this data is baked enough that they can use it as-is in their reports. Perhaps most important, however, is the ability to make annotations on the map. Having already selected the area, they're able to mark known backyard habitats, color-code them, and add text labels, even legends, onto the map, again using simple drawing tools (you see the icons up at the top). So not only do you get your local focus, you really get to make your map speak by adding your own annotations; you can literally circle something and write "don't dig here." There's also the ability to save and share the state of the viewer at any given time, including your annotations and map settings, all stored behind a relatively tiny URL. It loads back up and restores your previous state, which means you can pick up where you left off and save another version, or pass the URL on to somebody else and they can see exactly what you're seeing, up to and including that red circle that says don't dig here.
If you're really into paper takeaways, which a lot of us are, I can give you the instructions to follow this on an iPad, but sometimes you just want a PDF you can print out, copy, and hand out a hundred of. The PDF output is extremely rich, and they're very happy with it; it fits a lot of their reporting formats and they find they can use much of it almost as-is: your local map, regional scaling, color ramping, and all the same statistics you saw before. Now a quick overview of the underlying technology: we use good old OpenLayers, MapServer, Mapbox, and so forth. Yeah, the thing seems to have these little screws sticking out of it that keep it from seating properly. All righty. If anyone's really interested in the tech stuff, just go ahead and ask and we can chat about it later on. Again, MapServer and OpenLayers. GDAL is what we use for the raster calculations, via its NumPy bindings, which use the GDAL tools to do queries on rasters. There are other technologies for doing this; at the time, PostGIS raster was not considered production-ready, so we went with GDAL. These days, if I were to do it again, I'd probably use PostGIS raster: it's very performant and has much lower memory usage than doing it in GDAL. Why do we use OpenLayers? Just because I had boilerplate code for a lot of these uploading and editing functions. Leaflet is equally valid, but we had the boilerplate code. Mapbox was a wonderful choice for our base maps, simply because they're a remotely hosted tile service, very performant, very reasonably priced, and very attractive base maps without having to generate them all ourselves. There was some talk of using Bing or Google base maps, but those are APIs as opposed to simple XYZ services; the programming's a little tougher, and frankly they change their APIs and their terms so often that it's better to just steer clear and go someplace where the terms are really clear and stable. MapServer is what we use for the server-side rendering of these rasters. There are other technologies for it; MapServer is the one we chose, with no strong reasons, but most notably variable interpolation is very valuable here: having selected an area, the min and max can be passed back and used to set the color ramp with less programming and overhead than, say, doing it with SLDs in GeoServer. The PDF printing uses a toolkit called wkhtmltopdf; the "wk" means WebKit. Somebody ported the WebKit rendering engine into a bunch of command-line utilities and shared-object libraries for Linux, which effectively means you can take an HTML document, feed it to this thing, and get back a PDF. You design your PDFs in HTML, and it even supports JavaScript, D3, Google Maps, OpenLayers; it's absolutely wonderful, and you can start by copying and pasting your browser-side code and be 50% of the way there. D3 is what we use for the pie charts here; Highcharts is what we'd probably use next time, it's also really good, but D3 was fairly expedient for making the pie charts we needed. Now, for the RCS Viewer I wanted to discuss some of the real-world uses and impacts so far, and this is where the real value comes in, of course. One of them, which we're going to see here, is a simple land cover survey. This is the City of Tualatin.
Yeah, so this is a great tool not just for non-GIS folks, but even for GIS folks. You'll get calls asking for quick statistics of a city's land cover: it's 33% tree cover, great. You can compare that to what the City of Portland has done in the past; they're very precise about their percentage, and this tool came within a percentage point of what they claim their tree cover is. So when you compare two different cities, or compare against the nationwide goal for urban tree canopy, this is a great tool. Not only does it do that, but the same area statistics tell you how many parks you have in there: how many federally owned, state-owned, and locally owned parks, and that's great. We have some pre-canned geographies in here, all the city jurisdictions and all the watersheds, but you can also just draw any shape or upload a shapefile, and it will clip to that. The results come back within a couple of minutes; it's pretty fast, and it's just great to have those statistics for whatever you need. For a big complex area, calculation time can run 60 to 100 seconds; for this one, since it's canned and was pre-calculated the last time someone asked for Tualatin, the results come back in about three seconds. That beats ArcMap's time. Decisions, like this one, for example: a bunch of grantees looking to do improvements; I forget what the program was specifically. Yeah, so we have a grant program at Metro, and we get a lot of applicants for habitat restoration. We just uploaded their shapefiles into the program and labeled them, and during the grant-making decision process the reviewers were able to zoom into one of these areas. Since it's labeled, it's number 23, and you can ask: what's the habitat around here? Not as the answer, but certainly as some background reference information. Are there parks around there? Is it a key linkage between two already-protected areas? It was very helpful just in the discussion of these grant applications. This one is beautiful: trail planning, again with the annotations. Here's a planned trail, here's an existing trail; the pink trails are proposed, I believe, and the green are existing, or vice versa. But to be able to annotate this and then send it across to another jurisdiction, and they open the URL that gets generated, and all the annotation, the view, the opacity is all right there: now you can have a conversation on the phone looking at exactly the same thing, and that's been really helpful. And you can save it if that person doesn't answer the phone; you say, I'm going to call them next week, and when you open that URL next week it's still there, again with all the annotations and everything you were looking at. It's like a GIS whiteboard. Yeah, that's what we were trying for. Another example: backyard habitats. Do you have your locations? Do you have your stats? There's more to it than the stats. This was for Audubon; they were just curious what the Laurelhurst neighborhood looks like. They do backyard habitat certification: let us upload our shapefile or KML and see where our existing backyard habitat certifications are, along with the other variables describing those habitats; I can't quite read them from here. They could also get the statistics.
And it's just interesting to see neighborhood versus neighborhood: what's the tree canopy here versus there? And then to see that, if you were looking at this from the regional level, you would not see any high-value habitat at all. This is probably taking the tool a little too far; I mean, that's just parkland, that's why it's green, it's pretty easy to tell, and the rest is trees versus buildings. But if you zoom out a little more, you will see connectivity and some trends. So that's it for examples, but it really has helped form and leverage partnerships. It's very attractive to funders: when you go to Washington and pass some of these out, it looks official. I mean, it is official, but it really does say that we are working together, that we're all working on the same thing. I didn't mention this earlier, but this process, this Regional Conservation Strategy, was never meant to take the place of any previous efforts; it's really meant to complement them. If you're the City of Portland, you probably have better data than we were able to use in this modeling, just given our capacity. But this will also validate or complement what they've already decided: these are our high-priority areas. And if there are differences, you go check it in the field and ask why the model says this is high value. The model is going to make mistakes for sure: we don't know where oak habitat is, and we're going to pick up golf courses. So one thing we'll do is shade out golf courses or cemeteries; we have that in here as a layer, so you can just make those opaque, and the high-value habitat underneath them doesn't show up. I think probably the biggest benefit of this whole effort is just the partnerships and the collaboration of all these groups and organizations that have worked together; having everyone working from the same documents has been very valuable. Having a tool has really helped, mostly just by giving people access to this information: if there are only 10 or 20 GIS people using it, they can do great, sophisticated work, but it's not reaching the audiences we want it to reach. So I think that's kind of it. If anyone has any questions on the technology, or just on the conservation strategy... Hey guys, as you know I've used this data on about three different projects in the Portland metro area, and I think it's great, really powerful stuff. One question I have for you: when you set the local focus, you said you were rescaling it, setting the denominator based on the highest local score in the RCS. And the other question, which is getting fairly technical with the data, but I've been dying to ask you this: how do you integrate the riparian dataset with the upland dataset, since they overlap? On the first one: yeah, we take the min and max within the area, then do a cumulative area graph to find the percentile breaks. Within the local focus, that really is basically how it works; it's not simple quartiles, it's a cumulative area chart. The second one, how we integrate the riparian data: the riparian is now integrated into the upland; I don't know if it says upland on there. Originally we were going that way; it was going to be how the birds fly, the fish swim, and the critters crawl, but we ended up having some issues with defining those.
And so the riparian is really riparian habitat and is a great layer on its own. But it is also integrated into the high value habitats. Okay, so you can just use the high value habitat. You can, but I mean it's, so then your riparian values are going to be only a portion of what it is. And if you go out to Washington County in the flood plains, you're going to see very, I mean, Brian Shepard Clean Water Services, I'm sure he would say I use both, because that's not going to pull it up high enough. You'll see slightly variations in the riparian. Anyone else? All right. All right, well thanks guys. Sorry about the glitches on the presentation. We kind of missed those.
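The "cumulative area graph to find the percentile breaks" answer in the Q&A above is worth a small illustration. The sketch below, with made-up scores and areas, shows one way area-weighted breaks can be computed so that each class covers roughly the same share of the local area of interest; it is an illustration of the idea, not the RCS team's actual code.

```python
# Area-weighted percentile breaks: thresholds chosen from a cumulative area curve
# rather than simple quantiles of the scores. Inputs here are fabricated.
import numpy as np

def area_weighted_breaks(values, areas, quantiles=(0.2, 0.4, 0.6, 0.8)):
    """Return score thresholds such that each class holds about the same total area."""
    order = np.argsort(values)
    v = np.asarray(values)[order]
    cum_area = np.cumsum(np.asarray(areas)[order])
    cum_frac = cum_area / cum_area[-1]          # 0..1 cumulative share of area
    # For each requested break, find the score at which that share of area is reached.
    return [v[np.searchsorted(cum_frac, q)] for q in quantiles]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.random(1000)                   # fake habitat scores in the AOI
    areas = rng.uniform(0.1, 5.0, 1000)         # fake polygon areas (acres)
    print(area_weighted_breaks(scores, areas))
```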
|
The Intertwine Alliance has formed a Regional Conservation Strategy (RCS) for the area surrounding Portland, Oregon. The goal of the RCS is to guide the development by providing a high-resolution model of habitat valuation and land cover.The RCS Viewer allows website visitors to access and visualize high-value habitat and land cover, without specialized GIS knowledge or tools. By selecting, uploading, or drawing an area of interest, the user can bring up a feature-rich summary of habitat valuation and land cover.This presentation covers some uses of the RCS Viewer of interest to conservation-concerned planners in the area, as well as some technical details of interest to developers.
|
10.5446/31752 (DOI)
|
I'm Alejandro Martinez. I work on systems at CartoDB. Well, there have been a lot of talks already about CartoDB, which is, I think, the easiest way to get your data on a map in a couple clicks: just uploading the data, be it a CSV, a shapefile, or an Excel file, and in a couple clicks styling it, getting lots of analysis and insights from it, and sharing its visualizations. But I'm going to talk more about the insights and the headaches we've gotten getting this thing to work properly, because we've grown pretty fast, and we've gone from being a small system based on a completely open source stack. And actually, the whole of CartoDB, or almost all parts of it, is open source. And we're using lots of awesome pieces from the open source community: PostgreSQL, PostGIS, Mapnik, GDAL. They are building blocks, and we're building on top of them, and also making an open source platform, which we're using to serve millions of tiles every day to all parts of the world. This is actually a heat map, not of where the people looking at tiles are, but of the tiles we're actually serving around the world. This is a day that had a bit of activity due to some issues in Spain. And we're serving a lot of dynamic data. But the magic thing about us is that, because we're open source, anyone could replicate us pretty easily; the magic part we have is the maintained cloud environment. You just throw it at us: you just give us your data, style it, and customize your SQL and the way you want to visualize your data, and we'll serve it for you. And we'll make sure everything keeps working right. And in this regard, CartoDB is different from other solutions, I think, because we're dynamic, in the sense we're too dynamic, as we allow users to do anything they want with the data, using the SQL API to update the data and have it automatically refreshed and rendered again as tiles. In this case, this is an example of an application made by Fulcrum, which is a mobile app for gas inspectors to log where they've done inspections of gas infrastructure. And it's showing hundreds of gas inspections on a map that is refreshed in real time. So you can see which inspections were done yesterday or today, filtered using SQL, in this case in CartoDB.js, to filter which slice of the data you're showing, and changing the style. And of course, we're highly dynamic, but only sort of dynamic, because all the information you see can be updated in real time and will be refreshed in real time. But we also have to be aware of an important component of this dynamic part, which is that most of the dynamic maps are static in their data nature. Even this animated map of tweets during the Super Bowl, which is one of a huge number of Torque maps, the last Torque map I will show you today, actually is static in the sense that the data doesn't change. It is served to millions of people, but the data behind it never gets refreshed. It can be refreshed if the author of the map decides to add one more day of tweets; it will be rendered again. But most of the maps and things we're serving end up being sort of static. Because if not, the same thing would be rendered once and once and once again, and we'd just be wasting CPU resources rendering unchanged things. And well, that's why we need a cache layer. And it's a very complicated cache layer, because the way we have it, the moment anyone updates their data, the change has to show up. We have two layers of caching.
One of those is some internal HTTP caching we run on top of Varnish. And then there is the CDN caching, using Fastly, actually, on hundreds of points of presence around the world, which we have closely monitored. For example, this is a day's worth of requests from one server. And actually, of this graph, which is all of the requests we're serving, we're actually just rendering a couple of them. And even a lot of the data in blue, which is the cache misses, comes from the internal caches. So we rely on that a lot to actually be able to scale maps to millions of viewers, because the read to write ratio on the platform is very high. Even though, if you want to write, we'll handle that too. But right now, we're focusing on being able to read hundreds and hundreds of times, and actually being able to invalidate with multiple contraptions. For example, this one is a bit of a SQL trigger invalidating on Varnish, done in Python inside SQL, driven by RabbitMQ. I just wanted to put it up because I love the contraptions we've ended up with. But yeah, the fact is we're actually very flexible, but we also have very little control over what users can do. We're flexible in the sense that any user can upload any geometries and data that they have, and then run any SQL query that gets out of their heads to do operations with it, and then render it using the rendering layer, which is our tiler called Windshaft, I'll get to that later, with any SQL and CartoCSS they desire. We're giving power to the users to do big things. So they can upload their own huge geometries, hundreds of megabytes of polygons, and they run any kind of spatial operations on top of them, and they can render it with any SQL and CartoCSS they want. Even though, from a systems side, that is a bit of a headache, actually, because most of the geometries that users upload are not that optimized, and they have, for example, a huge precision for a use case that doesn't need it. For example, they're uploading data with meter or centimeter precision for a country, for just showing the overview of the country, and wasting space in their accounts and resources and that. And they're running spatial operations that are pretty heavy, because PostGIS has some very good things, but also some not really optimized things. And then rendering it using SQL that most of the time is not the ideal thing for the job. But we have no automatic way to tell that to the users, so we have to live with it. We live with it, but we try to pinpoint it, trying to learn how users are doing their stuff, and keeping close metrics on every server, every database server, and every user. In this case, this is our internal dashboard, not for the whole CartoDB platform, but for a subset of users. We're keeping track of the variances in response time on tile requests and SQL API responses. But the thing is that as we allow users to do anything they want, we can't bound things and be predictable and say all of the queries will take less than 100 milliseconds, because sometimes it takes a lot more than that just to get the SQL results of the geometry they want to render. So we're actually closing the walls of this flexibility, because the problem is that it ends up hurting, because users share a database server and a PostgreSQL instance. And PostgreSQL has no built-in way to restrict the CPU usage of the queries a user wants to run, apart from setting a timeout so queries don't take longer than, for example, five minutes.
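As a concrete illustration of the timeout lever just mentioned, here is a minimal sketch of capping query runtime per database role in PostgreSQL from Python. The role name, timeout value, and connection string are hypothetical and not CartoDB's actual configuration.

```python
# A minimal sketch, assuming a hypothetical "free_tier_user" role: cap how long any
# query started by that role may run, using PostgreSQL's statement_timeout setting.
import psycopg2
from psycopg2 import sql

def cap_statement_time(conn, role="free_tier_user", timeout="5min"):
    """Apply a statement_timeout to every future session opened by `role`."""
    with conn.cursor() as cur:
        # statement_timeout accepts values with units; '0' would disable the cap.
        cur.execute(
            sql.SQL("ALTER ROLE {} SET statement_timeout = {}").format(
                sql.Identifier(role), sql.Literal(timeout)
            )
        )
    conn.commit()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=user_db")  # hypothetical DSN
    cap_statement_time(conn)
```

This only bounds wall-clock time per statement, which is exactly the limitation the talk goes on to address next with process-level prioritization.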
So we're actually working on that, too, with another layer on top of PostgreSQL, which will allow not only limiting by statement timeout, but also using Linux cgroups to actually give the processes of each database connection a different priority on the database. So we give them power, but we limit the CPU power they use so they don't hurt other customers. And to give a bit of history of how we grew the platform, I want to explain that at first, when we were very small and we were in the hundreds of users, all of CartoDB was just bare metal servers, with all of the parts of the rendering stack in one single machine: the database; the SQL API, which is an API you can use to just throw SQL queries at the database server; our tiler service, which is open source too, which is Windshaft-CartoDB, which is built on top of Mapnik; and then our management UI, which is the interface you see, which is made in Ruby on Rails with a hefty lot of JavaScript client-side code. And what we did is just this: as we needed to scale and add more users, we just started sharding the users. This is not the real distribution, but adding new servers to the thing and distributing users between servers. But that sharding approach gets to be not so convenient when you have a single user which is hammering the server a lot and using the whole potential of the CPU of that server, and then you have a lot of idling resources on all of the bare metal servers all around. So then there was a point where we said, we have to scale, and we moved to Amazon EC2. And inside, we moved what was previously inside a single machine to be distributed over machines, so we can now share parts of the computing power among the tiler, the SQL API, and the Rails side. And actually, we use them for different database servers. The only thing that in CartoDB is unique to a user is the database server, which means each user is only in one cluster, be it replicated or not. And then the rest of the things are shared. The tilers actually render tiles from all the machines, the SQL API connects to all the machines. We have some sort of prioritizing, so enterprise and paying users get more resources, get to do more queries, and get their traffic prioritized against the free users. But that is the way we're going for now. And once you remove CPU from the equation, it is no longer your bottleneck. But when you start moving things out, and you send things in ways they're not supposed to be sent, you find a different kind of problem, which is the bottlenecks that aren't there until you find them. For example, there was a client which just served data from a relatively small database; these were both tiles. For example, this one is just empty. It is actually the middle of a polygon. And this one is actually also, I think, a polygon. The data there in the source, I think, was a rasterized polygon for some reason. And we noticed that when we moved from the dedicated instances that put everything in the same machine to putting everything in distributed things, everything started to go wrong: this client in particular started going more and more slowly. And we couldn't figure out why. So we started looking at the data set, and we figured out that the polygon that those two tiles are rendered from is actually 25 megabytes of data, which is sitting in the database. And with PostGIS and Mapnik, the polygon intersection and cutting on the PostGIS side doesn't seem to be very efficient.
So what Mapnik does is actually download the whole polygon and then paint it. The problem is that for each tile, you get that bit of information from a big polygon, but also, it sounded like too much. So we started to look at the protocol that PostGIS uses to communicate with Mapnik, and it just comes down to the internal format it uses, which is WKB. And we found that all of the data between the Postgres server and the tiler server is sent in WKB, which actually has an 8-byte float for each coordinate. With those 8 bytes, we were basically transferring all of the points of the polygon, 8 bytes per latitude or longitude, depending on which part of the coordinate, over a single network link. So what happened? We just went to the monitoring tools, we looked at the traffic, and we found that the reason everything was going wrong was that we were actually collapsing the network interfaces of the machine transferring data. We did the math, and it turned out that for each tile we're rendering, we're transferring 25 megabytes of data, which is all of the points of the polygon with 8-byte precision, which is actually a precision of 2.17 times 10 to the power of minus 10. And yeah, this was our face when we found that out. And because almost everything in the data we're transferring is looked at at high zoom levels, or relatively high, even a meter of precision would be enough. But 10 to the minus 10 is definitely too much. And we started doing some tweaks, using a hacked-up version of both PostGIS and Mapnik to actually use a format we call CDB WKB, which is a stupid format which just cuts the bytes and the precision in half. And that is like one centimeter precision, which at high zoom levels is even less than a pixel. And we're experimenting with it. But instead of that being the solution, we actually want to do it the clean way. And while we're working on the real solution, we just decided to scale the PostGIS part and add some slaves, so we have the capacity to deal with this much precision of data. And for the solution, for example, the one we're thinking of is, inside PostGIS and the tiler Windshaft, adding something that packs the data and then pushes it, because most of the time users are not changing the SQL or the source of the data itself, but they're changing styles, and we don't need to get the whole data again for that. And we want to get away with not hitting the database server as much as we can. And we're still working on that. And one of the approaches we're investigating is just to use Mapnik vector tiles at the output of PostgreSQL and then put a cache in the middle, so we can use the same data both for rendering tiles with Mapnik in our own Windshaft on the server, and also for sending the same Mapnik vector tiles to a client using client-side rendering, which is something we're also working on. And this is just one of a whole bunch of problems we're having with scaling. And they're pretty fun. So if you're interested, we're looking for some people. We're growing. We just got some money. And if you're interested, we have a job page with positions we're looking to fill, especially senior site reliability engineers and developers, and all kinds of jobs with cool problems to work on. So that's all. Thank you. And if you've got any questions, now's the time.
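To make the WKB numbers above concrete, here is a rough back-of-envelope sketch with Shapely. It is not CartoDB's patched CDB WKB format; it only shows why 8-byte doubles add up for dense polygons and how much snapping coordinates to a coarser grid and simplifying shrinks the payload. The dense "polygon" is fabricated.

```python
# WKB payload math for a dense polygon, plus the effect of reducing precision.
import math
from shapely import wkb
from shapely.geometry import Polygon

n = 500_000  # a fake circle with 500,000 vertices stands in for a bloated upload
poly = Polygon([(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
                for i in range(n)])

raw = wkb.dumps(poly)
print(f"{n} vertices -> {len(raw) / 1e6:.1f} MB of WKB "
      f"(about 16 bytes per vertex: two 8-byte doubles)")

# Rounding to ~1e-5 degrees (roughly a meter) and simplifying removes most of it.
snapped = Polygon([(round(x, 5), round(y, 5))
                   for x, y in poly.exterior.coords]).simplify(1e-5)
print(f"snapped + simplified -> {len(wkb.dumps(snapped)) / 1e6:.3f} MB, "
      f"{len(snapped.exterior.coords)} vertices")
```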
Thank you. One thing I do for my caching is I copy all the geometries and all the other data for each map into its own cache table, and then I send it to the tiles, so at least there are no subqueries. But I'm interested in your, like, Windshaft, and how that is faster than TileStache, or how that compares and why you went with that, because I'm right now using TileStache to make my tiles. Yeah, actually, well, there's someone from that project right there, who might be more suitable to answer the question. But Windshaft was done, I think, before TileStache, so they're pretty comparable. Windshaft was developed in house by CartoDB, even though it's open source, to more or less fit the requirements we have. But basically, they're both just a front end to Mapnik and the way it does things behind. So yeah, in that regard, it's more or less comparable. But TileStache has a lot of good things, and Windshaft has too. And Windshaft is just filling our needs. And we're actually thinking of investigating how it works, because it might actually be better than ours. And about the caching, the problem with this is that you allow the users to dynamically change the SQL queries, so it's a bit difficult to actually cache the data output itself, because first, there's a bunch of data, and then second, we don't know how often the user will revisit it, because imagine he's just editing styles in the CartoDB UI. And we get a SQL query, he's trying things, and we cache it, and for what? That would be quite inefficient. So we're working on doing some more caching on demand, caching the data that the users actually request, so what's inside the tile, inside a bounding box. Not the whole dataset, because if not, we'd have to cache maybe even bounding boxes of places where there's no data at all. One of your last diagrams showed PostgreSQL, something, MVT, and a cache. Is that a caching layer for PostgreSQL? Where? That one there. I was wondering what exactly the MVT was, and if that indicates some kind of caching. Yeah, MVT is an intermediate format, which is actually being created by Mapbox, which basically means Mapnik vector tiles, which is just a way to get the information of a tile inside a protobuf in a very efficient way, and to get all the information related to a geometry. So this we haven't developed yet, but basically, we want to generate those tiles, because those tiles are unstyled, just the geometries. We can just cache the raw geometries and then paint them differently on the tile, or even client side. Anything else? OK, so we're done. Thank you very much. Oh, there's another question, huh? The part there where it says the cache above the MVT there, is that referring to a standard Varnish-style cache or memcached? We're thinking of going the Varnish-style cache way, but we'll do it more or less like that. Thank you. Thank you.
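A tiny illustration of the "unstyled geometries in a protobuf" idea from the Q&A above, using the mapbox-vector-tile Python package. This is my own assumption of a stand-in library (CartoDB's pipeline is Mapnik/Windshaft, not this package), and the layer name and feature are made up.

```python
# Encode a style-free geometry into a Mapnik/Mapbox vector tile protobuf and read it back.
import mapbox_vector_tile

tile = mapbox_vector_tile.encode([{
    "name": "parcels",                     # made-up layer name
    "features": [{
        "geometry": "POLYGON ((0 0, 0 100, 100 100, 100 0, 0 0))",  # tile-local coords
        "properties": {"cartodb_id": 1, "landuse": "park"},
    }],
}])

print(f"{len(tile)} bytes of protobuf")    # compact: geometry + attributes, no styling
decoded = mapbox_vector_tile.decode(tile)  # round-trips back to plain dicts
print(decoded["parcels"]["features"][0]["properties"])
```

Because the tile carries only geometry and attributes, the same cached blob can be painted differently on the server or in the browser, which is the point made in the answer above.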
|
At CartoDB is an open source stack that includes PostgreSQL, PostGIS, Mapnik and Leaflet. The hosted version enables thousands of users to make new and interesting maps everyday. With some of those users including Al Jazeera America, Twitter, and even online gaming platforms, we aren't scaling for one popular webpage but for thousands of different ones each day. On top of that, maps aren't constrained to a single filter, single style, or to a predefined zoom, CartoDB allows users to access the full power of a dynamic database from the front end. In this talk, I'll present the architecture decisions we have implemented that make it possible to turn PostgreSQL and PostGIS into components of a powerful real-time data visualization tool. These decisions cut straight through the CartoDB software stack, from PostgreSQL and PostGIS through our caching and tile services, and up through to our CartoDB.js library. We'll talk about our on-demand tiling service, our caching strategy, and our implementation of the novel data format for Torque. Each of these areas has enabled our users to make user of entirely open source tools to create maps and services that scale, remain fast, and are beautiful.
|
10.5446/31753 (DOI)
|
Al is a news application developer at ProPublica, a non-profit investigative news outlet based in New York City. He is equal parts designer, developer, and reporter, and has covered campaign finance, schools, disaster recovery, and other topics. Before joining ProPublica, he was a designer and developer at Talking Points Memo and creator of TPM's popular poll tracker application. He was honored with the Society of Professional Journalists' Sigma Delta Chi Award for his map surrounding FEMA's response to Hurricane Sandy. He's also earned awards from the Society for News Design, the Online News Association, and Investigative Reporters and Editors. I think that's enough. I don't know. You know them better than he knows his awards. Most recently, if you haven't seen it, and I'm sure we'll hear more about it today, ProPublica and Al's work called Losing Ground came out just a couple of weeks ago, about land loss in the bayous of Louisiana, and is just absolutely great work. I haven't had as much time to dig into it for obvious reasons, but I fully intend to over the next couple of weeks, because it's deep and rich and wonderful and is exactly the sort of product that's enabled by the tools we build here. For that, you all get a big round of applause, but don't give it to yourselves. Give it to Al. I'll get off stage. You're done with me. You're on to Al. Thanks, Darryl, and thanks to everyone. This has been an absolutely fantastic event, totally blown away. We don't get this kind of hospitality at journalism conferences, so it's pretty awesome. I am not going to sing or dance, and I don't have William Shatner here, so you're just stuck with me. One thing before I start, this URL appears in the slides. There's a lot of URLs throughout the presentation, and I won't have enough time really to get through all of them, so you might want to bookmark that or check it out later. My name is Al. I work at a place called ProPublica, which is a nonprofit investigative news outlet based in New York City. We're pretty small. We're about 50 people. We usually do long projects, and one thing that's unique about us is that we have a mission. We do stories with moral force. We do stories that channel the disenfranchised. We do stories that represent the underrepresented. I work on a small team within that group called the news applications team. We are reporters just like the rest of the reporters in the group, but we also make graphics and what we call news apps. We also write stories. We pitch stuff. We do the same journalism that all the other journalists and reporters on staff do, but we channel it into a few different other places besides just text. You may know graphics; stuff like what the New York Times does is fantastic, and that's what we like to call graphics. News apps are a little more complicated. News apps are what we like to call telling a story with software instead of words and pictures. So, a traditional story, you'll read through it, and at the top you might get a little bit about the story, and you'll read through it, and you might get to an anecdote about a person, and then you'll read about them, and that might be indicative of a larger trend or something. But news apps actually let you find yourself in that trend. They let you put yourself inside the story, so that's what really excites us about news apps.
Just for example, this is an app I wrote a couple years ago called the Opportunity Gap, and it's about education inequality in America and about how various states are different at providing equal access to stuff like advanced placement classes, advanced math, sports, stuff like that. So, you could actually sort the states by programs and see how well each state does. And so, that initial page, what we like to call the far view, that's if you're reading a story, a traditional news story, that's the top, the lead. These are kind of traditional journalism jargon-y words. The lead just pulls you in at the beginning, and the nut graph tells you the gist of the story. And it used to be that before we had computers, people would write stories, and editors would just chop them off when they ran out of space in the paper. And that's usually later on, the anecdotes might get cut out, stuff later on might get cut out. But in news apps, we have the far, the lead, and the nut. And then the most important thing is this thing called the near, which is where you can find yourself in a story. And in this app, you can actually look up your school, or look up your address, and see information about it, and see how well educational opportunity is provided at those schools. So for example, in Williamsburg, Brooklyn, it's kind of interesting because that's a pretty wealthy area these days. And 99% of kids are on reduced-price lunch, and that's our proxy for poverty here, a reduced-price lunch. And the enrollment rate in AP classes is staggering. So if I was a local news organization, I might want to write a story about that. And a lot of news organizations have. One of the things we like to do is put our data out there and let local news organizations write stories. Likewise, another unique thing about us is that all of our stories are republishable under a Creative Commons license. So we're almost doing what you might say is open-source journalism. Oh, yeah, we also do a lot of maps. So in this same opportunity gap application, you can see these two views of Southern California. And those are some maps we do. And one interesting thing about them is if you sort by reduced-price lunch versus AP classes, you'll see two very different maps. You'll almost see two inverse maps, which highlights the lack of equal educational opportunity in Southern California. So we do tons of maps at ProPublica, a lot of these state maps, some other. We do tons of different kinds of maps. And we also like to open-source our code. So what we do is when we write something that's abstractible, we like to open-source it or turn it into a tool and then release it out to the greater public. So in this instance, this is a simple SVG map tool that takes GeoJSON and turns it into an SVG map for your site. We also have this thing called Stateface, which is an open-source font. And it's every state plan projection in the country as a font. And lots of different news organizations have used this from the New York Times to the Guardian to the Washington Post. Just tons of news organizations have used it. So it's gotten a lot of, it's super popular. But today I don't really want to talk too much about tools. I want to talk more about stories, because that's what we do. We do stories. And I want to tell you three stories today that came out of geography. And they're three places in time, and three places in the US, and three points in time. And these stories came out of geography, and the presentation also was informed by geography. 
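For readers who want a feel for the GeoJSON-to-SVG idea behind the open-sourced map tool mentioned above, here is a toy version in Python. It is not ProPublica's actual tool, just the gist: read the rings out of a GeoJSON feature collection, fit them to a viewport with a naive equirectangular scaling, and emit SVG paths.

```python
# Toy GeoJSON -> SVG converter; fine for small areas, no real projection handling.
import json

def geojson_to_svg(geojson_str, width=400, height=300):
    gj = json.loads(geojson_str)
    rings = []
    for feat in gj["features"]:
        geom = feat["geometry"]
        if geom["type"] == "Polygon":
            rings.extend(geom["coordinates"])
        elif geom["type"] == "MultiPolygon":
            rings.extend(r for poly in geom["coordinates"] for r in poly)

    xs = [pt[0] for ring in rings for pt in ring]
    ys = [pt[1] for ring in rings for pt in ring]
    minx, maxy = min(xs), max(ys)
    sx = width / (max(xs) - minx)
    sy = height / (maxy - min(ys))

    def path(ring):
        pts = " L".join(f"{(pt[0] - minx) * sx:.1f},{(maxy - pt[1]) * sy:.1f}" for pt in ring)
        return f'<path d="M{pts} Z" fill="#ddd" stroke="#333"/>'

    body = "".join(path(r) for r in rings)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{body}</svg>')
```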
So I'd like to start in California in 2010. So we're all census nerds here, and we know that every year the census releases a whole bunch of these files called TIGER/Line files. We're all really into those. And they release those every year because every year the census does a survey, and they find where people have moved, what demographic changes have happened, and they release all these files. But the other thing that happens is this process called redistricting. And redistricting is when they see where people moved, and see where demographic changes have happened, and they actually redraw the lines of congressional districts to match those. And what also happens in the course of redistricting is the nefarious side of that, which is where people who have an interest in keeping lines a certain way like to draw those lines to match their interests. And that's what we like to call gerrymandering. So we got all these new files from the census in 2010, and we wanted to figure out, is there a way for us to detect gerrymandering? Is there some kind of algorithm? Is there some kind of beautiful machine that we can build to detect gerrymandering? And so we kind of looked around, we called a lot of people, we talked to some people, we wrote some code. And what happened was we landed on Northern California, but not for the reason that you might think. You see a lot of stuff in the media, stuff like this, that tries to predict gerrymandering. There's this weird post on Vox where this guy tried to draw diamonds to make equal districts. You see stuff like, on Stephen Colbert, these crazy districts that are elongated and in all different shapes. The mapping company Azavea even created a gerrymandering index where they looked for compactness, basically how close to a circle a district is. And with all respect to Azavea, they had some caveats, which I'm going to get to in a second. But there's another one called Polsby-Popper, which looks at the length of the perimeter of the district to try to predict gerrymandering. But when you look at all of these algorithmic techniques, they all end up failing, because you get to districts like New York or Baltimore. In New York, for example, in Queens, there's the Rockaway Peninsula, which, if you've ever been there, you know is a barrier island, and it's long and skinny. And that's just the way it is. And there's a community that lives there, a community of interest. And the same thing with Baltimore. Baltimore is right on the coast and it's right on a harbor, and the rest of the city is kind of a parallelogram. So you can't really say that this is a gerrymandered district just because it's long and skinny. That's what Baltimore looks like in context. But then someone might say to you, all right, well, what about landlocked districts? What about places like Illinois where you have this crazy district? This has to be gerrymandered. Look at how narrow that is in some places. So if you look at this awesome graphic that the New York Times did in 2010, where they put together a graphic showing ethnicities of where people live according to the 2010 census, you'll see that that district matches pretty well the area where Latinos live. And this is actually a Voting Rights Act district, so it has to be the shape it is. So no, we can't actually predict gerrymandering through an algorithm. And that's something that we came to. And so we knew that gerrymandering was happening.
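For reference, the Polsby-Popper compactness score mentioned above is just 4 times pi times a district's area divided by the square of its perimeter: 1.0 for a circle, approaching 0 for long skinny shapes. A quick sketch with Shapely, using made-up rectangles in place of real district polygons (which would come from the TIGER/Line files):

```python
# Polsby-Popper compactness: 4 * pi * area / perimeter^2.
import math
from shapely.geometry import box

def polsby_popper(district):
    return 4 * math.pi * district.area / (district.length ** 2)

compact = box(0, 0, 10, 10)      # a square-ish "district"
skinny = box(0, 0, 100, 1)       # an elongated one

print(f"square-ish: {polsby_popper(compact):.2f}")   # ~0.79
print(f"elongated:  {polsby_popper(skinny):.2f}")    # ~0.03
```

As the talk itself stresses, a low score alone proves nothing: the Rockaway Peninsula and Voting Rights Act districts score badly for perfectly legitimate geographic and community reasons.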
So we decided to kind of do some reporting and see if we could figure out. We stepped away from our computers for a second and put on our reporter hats and actually flew out to California and tried to figure this out. And when we saw what was happening, we read through some of the districts that had been created. We saw that there was a lot of funny stuff happening. So for example, this district, District 48 is one of the things that binds this district to the other is a common love of intense beach recreation. There's another district farther up. I think this was a few years ago or a few decades ago, farther up the coast. It was united. It was a long skinny district. It was united by the common ground of an endangered condor. So people will use all kinds of things to justify a district. And this led us to looking into this guy named Jerry McNerney, who is a representative in Northern California, and we did some reporting. And we found out that according to an internal memo, the Democrats recognized that early on that they could basically control all of Northern California as long as they didn't, as long as no district crossed the Golden Gate Bridge and as long as they could pull off some Democrats from this district, which is District 11 in the top left, into pull that triangle into a unified district which encompassed all of San Joaquin. So it's a little hard to see there, but basically that little triangle in the bottom right is mostly Democratic and the rest of District 9 there is Republican. And the way it works in California is there's an independent redistricting commission that's supposed to be nonpartisan, is supposed to listen to people testifying and then make the recommendations for districts. What we found was that the people that were running and the people that had the vested interest in staying in power, stayed in power mostly by hiring redistricting consultants and working with certain puppet groups which testified on their be-haves. So this is Jerry McNerney's new district and you can see that there's that pocket of Democrats in the corner which offsets the Republicans in the rest of the district. And the Democrat strategy here was actually really brilliant. What they did was they set up this group called One San Joaquin which on the surface looked to be actually a group that favored Republicans. All their rhetoric was about unifying this area and creating one unified San Joaquin, but in reality it benefited the Democrats and you can see from this FEC report in the top left there that Jerry McNerney paid redistricting partners which is that consulting firm to draw these maps and those are the maps that essentially became the maps that we see in the census today and the districts that are in effect today. This also happened in Southern California. There was a representative named Judy Chu who represented an area called Rosemeade as well as a lot of other cities in Los Angeles, but her main constituency was in Rosemeade and there was also some other cities that were nearby, but she had a vested interest in keeping Rosemeade within her district. And if you look at the initial district before redistricting and the district afterwards, Rosemeade stayed within her district yet all of these other and Rosemeade is primarily Asian, American and all these other districts which used to be contained within the same district got split apart and these districts are all primarily Latino. 
North El Monte stayed in her district, but South El Monte, El Monte and East LA all went into separate districts. Now this wasn't an accident. This was a concerted effort on her part and on puppet groups that were set up and you can read more about this in our story, the URLs back here at the bottom and I encourage you to read it because it's really fascinating. And if you look at the maps that the consultants drew for her and the final maps, there's not a whole lot of difference. So this is the maps that the consultants drew for her and this is the final map that was put into place. So gerrymandering is real. Redistricting is not a totally impartial nonpartisan process, but there's a whole lot of different ways that redistricting happens, sorry, gerrymandering happens. One is a process called cracking. Now cracking is where a bunch of districts, sorry, where one community of interest is split up into many different districts. So the city of Austin and the city of Rochester, I'm sorry, the city of Austin and the city of Rochester here are all cracked in the sense that they're split up into multiple districts. Cracking is another example of a strategy to gerrymander, but this is kind of the opposite of packing. This is where communities are all put into one district. So these cities in the South, in Florida and Alabama, are all packed into one district and these are primarily African American districts, African American cities that are packed into one district. Now you can even get a little more specific. So hijacking is when you actually look for a certain house and draw someone out of their own house, someone that's running for office and kidnapping is the opposite when you draw someone into another district that they don't want to be in. But what about these maps here? So we did a lot of cool maps for this project and I want to talk a little bit about the actual maps that we put on the web because what we do is not just on the reporting side, we also care a lot about the presentation side. And in 2011, the state of online mapping in news was not that amazing and you guys are all probably more not in this world so you don't know how primitive we are in the news industry. But in 2011, we were mostly using Google Maps to put maps on the web and other tools like that. At this point, I think TileMill was still pretty much in its infancy but you could correct me on that. There was some other tools but most of the time we were using Google Maps and Google Maps is primarily a wayfinding instrument. It's primarily, it's primary use is to help you find certain points and they make decisions about what points of interest go on those maps and what's important and what's not and they make it really, really easy for you to add as many points as you want to those maps. And this one map, I don't know if you saw this newspaper in the Hudson Valley in 2011, put up a map of gun permit owners, of who had gun permits where in the Hudson Valley. And this caused a huge uproar, as you may imagine, and the map was pulled and I think that they even passed a law locally saying that this data wouldn't even be available anymore. So I think this actually did a lot of harm and I would argue that one of the reasons this map happened was because of this tool, the tool that made it easy for you to put all of the data onto a map and slap it on the web and go with it, go with your story. So we did a lot of stuff with Google Maps. We tried to make it work for news. 
We tried to limit the features, highlight the features that we wanted, but Google was still making a lot of decisions about what is important on these maps: these freeways, these parks, this water, stuff like that. So what we were trying to do was make maps that you traditionally see in a print newspaper. This map, I really like this map a lot, it's a map of New Orleans after Katrina, and it shows you the area that flooded, and it shows you just the points of interest you care about: the hospitals, the medical centers, which medical centers are open, which are closed. The extent of the flooding is clearly labeled in the same color as the flooding itself. And you don't get bogged down with stuff like freeways and stuff that doesn't pertain directly to the story. Amanda Cox at the New York Times has a famous quote where she says, it should never be "here's some data, see what you can find in it." If we're doing that, we're failing. In news, we like to tell people exactly what's important and show them what's important in the news. And so what we wanted to do was create a system that allowed you to add just the layers you wanted to add, and have a map that showed readers exactly what we wanted to show them, so they would get the story immediately. And so for the redistricting maps, we wanted just a very clean Natural Earth based layer. We wanted, of course, the census layer of the districts, the lines of the districts, the demographics and ethnic trends behind the districts. We wanted a rich annotation layer to let people know what they were looking at in these maps. And maybe most importantly, we wanted a limited amount of navigation. We didn't want the Google Maps style slippy map where you can go to any place on the earth. We wanted to limit you to looking at our story. And that led us down a somewhat circuitous path, but we ended up writing our own tile generation platform. We wrote it in C, and this is what we call Simple Tiles. And we also have this thing called Simpler Tiles, which is the Ruby bindings to the C library, which is important because we write our apps in Rails and we like to serve maps directly from our news apps. And the API is super simple. And actually the API, I think, reflects exactly what we're trying to do in news. We're trying to add just the layers we want. We're trying to filter them exactly the way we want, adding the styles we want. And we're using Cairo here to create the context for the PNGs that we spit out. So there's a very simple styling language in here. And it's even simpler in Ruby. In Ruby, you basically just point to a shapefile, which could also be a PostGIS database, add a certain filter, and then add your styles and spit out a PNG. And this goes really, really well inside, say, your Rails controllers, to spit out tiles on the fly when someone asks for something. So it's really, really simple. It's basically just a set of structs that all work with each other. My colleague Jeff Larson wrote this, and this is his brilliant vision in keeping Simple Tiles super simple, hence the name. And all you do here is you set some bounds, you set a projection, you add some OGR data to it, you filter that, and you style the Cairo context using this super simple Cairo styling language. And then you've got a PNG. And even just this year, we added raster support to it, which is even simpler.
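The actual Simple Tiles and Simpler Tiles APIs are C and Ruby, so the sketch below is only a rough Python analogue of the "tiles on the fly" idea just described: turn a z/x/y request into a lon/lat bounding box, fetch whatever features fall inside it, and paint a 256x256 PNG with Cairo. The feature query is left as a stub, and none of the names here come from the real libraries.

```python
# Rough analogue of an on-demand tile endpoint: slippy-map math plus a Cairo PNG.
import io
import math
import cairo

def tile_bounds(z, x, y):
    """Slippy-map tile -> (west, south, east, north) in degrees."""
    n = 2 ** z
    def lon(xt): return xt / n * 360.0 - 180.0
    def lat(yt): return math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * yt / n))))
    return lon(x), lat(y + 1), lon(x + 1), lat(y)

def render_tile(z, x, y, size=256):
    west, south, east, north = tile_bounds(z, x, y)
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, size, size)
    ctx = cairo.Context(surface)
    ctx.set_source_rgb(0.96, 0.96, 0.94)   # plain base: no freeways, no POIs
    ctx.paint()
    # ...here you would fetch features intersecting (west, south, east, north),
    # scale their coordinates into pixel space, and stroke/fill them with ctx...
    buf = io.BytesIO()
    surface.write_to_png(buf)
    return buf.getvalue()

if __name__ == "__main__":
    png = render_tile(11, 602, 770)        # a tile over New York City
    with open("tile.png", "wb") as f:
        f.write(png)
```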
And all you do is just throw anything GDAL can read at it, any kind of a geotiff or any of the many numbers of formats that GDAL can handle. You re-project it, throw it on Cairo context, and there you go. We've got a demo running at this URL here of New York City Landsat. This is a true color image that I kind of hastily pan sharpened RGB image and threw into simple tiles. And this entire demo took about an hour, including the pan sharpening and color tweaking. And it's about eight lines of code. So we're using a little tool called Sinatra, which is like a very simple web framework for Ruby. And simpler tiles, and you basically just throw in the path to a raster, a geotiff, and you're off. So it helps us a lot because we like to write what we call deadline software, because something comes up and we need to get a new app or a graphic out fast, and we don't want to write a whole lot of code. So simple tiles and simpler tiles has helped us a lot with that. So these are the kinds of maps that simple tiles has helped us to create and get us closer to these kind of print maps that we admired so much before we made them. And of course, simple tiles also works on a Google Maps base. They work anywhere. It's just slippy map tiles, and you could see them working in our opportunity gap application as well right here. So that's story one. Story two is about New York City. In New York City in 2012, as you all probably know, Superstorm Sandy hit New York as well as most of the Eastern Seaboard. And it kind of paralyzed New York City for a little while. We were our offices in the financial district, and definitely within a flood zone, Manhattan lost power for a week. The subways were down. This was not a small storm for New York City. And we once kind of, we were back in our office and back in our chairs. We decided to kind of look into this, look into the recovery, the response, how this was all handled. New York hadn't seen a storm in a while. Obviously after Katrina, stuff didn't really work that well, and the response wasn't handled too well then. So we kind of had a feeling that there'd be some stories in there. And a few months after the storm, New York City released this thing called their New York City Resilience Report, I believe. I forget exactly what it's called. But we were really interested by this one map in the report. And this map shows where Sandy flooded and the floodplain maps that were in effect when Sandy flooded. So two things jumped out at us immediately from this map. One was, wow, these flood zones didn't really do too well. They didn't really predict where Sandy would flood that well at all. And the second thing was this date here, 1983. That was the last time the flood maps had been updated in New York City. So that kind of confused us a little bit. Why had the flood maps in the largest city in the country not been updated since 1983? So we called some people, we did some reporting. We talked to some of the people doing GIS in the city and tried to figure this out. And we got even more confused when we went back because people told us, oh, you should probably look at the flood insurance study. And we got even more confused because the flood insurance study said it was revised in 2007. So why was the flood insurance study revised in 2007? But this other report says it's 1983. This is all confusing to us. What went wrong and what changed when? So we actually opened up the report and started reading it. And this passage kind of jumped out at us. 
And what's super interesting about this is that the coastal flooding analysis, the coastal storm surge analysis and the elevation data, were super old; the coastal storm surge analysis, especially, was from 1983. And so we were just kind of confused. Why would they update the maps and not update this underlying analysis? A lot of it has to do with money. But we started digging back a little bit more, and there was a whole slew of GAO reports. The Government Accountability Office has a whole slew of reports about FEMA. And that led us back to the early 2000s, where we looked into FEMA's map modernization plan. And essentially their plan at first was to digitize all of the maps in the country. Let's just get the maps all into a digital format, and then we'll go from there. Let's not care too much about what these maps are actually saying. Let's just make sure we've got shapefiles on the server, and then we'll handle the next task. Well, in 2004, a GAO report came out saying that that was probably not such a great idea, because all of these maps were in different standards. They didn't adhere to a common standard. There didn't seem to be any kind of rhyme or reason to how the maps were chosen to get additional data, or even which ones were digitized first. But the biggest problem was that most of them did not update their underlying data. They simply took the same maps and put them into digital format. And so in 2006, FEMA issued what they call the mid-course adjustment, when they realized this strategy was not working too well. We should actually change our percentages and maybe digitize a little bit fewer, but update more of them with underlying data changes, so people would actually see a map that was updated at a certain time and actually trust the data behind it. Well, as far back as 2005, the New York State flood chief actually sent a letter to FEMA with great alarm about the state of the maps in New York State at the time. He said that the maps were insufficient and of poor quality, but really good looking maps, and they failed to provide the data needed to adequately manage development in the floodplain. So that's as far back as 2005. So we kind of looked around, and we wondered a little bit how other counties around New York City fared. And we looked to Nassau County, which is on Long Island, basically right next to New York City but to the east. Nassau County's maps were updated in 2009. And we were a little smarter this time. We actually wanted to see what they did in these maps. So we looked inside the book, and we found that, lo and behold, they did a new coastal storm surge analysis in 2008. So that got us really curious. If Nassau County did a new storm surge analysis in 2008, and the one for New York City was from 1983, how did they compare? Well, we made that map. And in Nassau County, 89% of the flood area was predicted by the maps, whereas Kings and Queens, which are Brooklyn and Queens respectively, did not do as well. And we did this by whipping out some OGR, some C++, and playing around with our OGR, just looking for the intersections to see how well these maps performed and how they varied.
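The intersection check just described can be sketched in a few lines with GeoPandas and Shapely: what share of the Sandy inundation footprint fell inside a county's effective flood zone? The file names and projection below are hypothetical stand-ins for the FEMA flood zone and Sandy extent data, not ProPublica's actual C++/OGR code.

```python
# Share of the Sandy flood extent covered by an effective flood zone.
import geopandas as gpd

def coverage(flood_zone_path, sandy_extent_path, crs="EPSG:2263"):  # NY Long Island (ft)
    zone = gpd.read_file(flood_zone_path).to_crs(crs).unary_union
    sandy = gpd.read_file(sandy_extent_path).to_crs(crs).unary_union
    return zone.intersection(sandy).area / sandy.area

if __name__ == "__main__":
    pct = coverage("nassau_flood_zone_2009.shp", "sandy_inundation.shp")  # hypothetical files
    print(f"{pct:.0%} of the flooded area was inside the mapped flood zone")
```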
And the color represents how well broadly the county predicted the sandy flooding. And there's not much of a pattern between the county population and which maps did better or worse. Some of these maps in New Jersey actually were not even digitized yet. Those are the ones in gray that are still in paper. The Atlantic City's maps are still in paper, which that's kind of a funny thing to think about. So you could kind of go through this metacritic for flood maps and see the Rockaway Peninsula as we were talking about a little earlier. They didn't do all too well in that one. And that's a barrier island we're talking about. There's not a whole lot of elevation there. So we wanted to find some people in this story because we like to write words too, not just software. And for the words part of this project, we wanted to find some people that would illustrate the story. And we thought we could find these people with our computers. It's kind of a funny thing for a reporter to think to do. But we thought we could find some people that would represent this story by looking for people that bought houses after 2007 and are not in the 2007 flood zone, but are within these preliminary flood zones. So FEMA actually released some preliminary flood zones right after Sandy hit that were in the works for a while and actually tracked pretty well to Sandy. I don't have that map of right here, but everything in the brown area here, what's in those preliminary flood zones. So the flood zones that FEMA made with the new coastal storm surge analysis with LIDAR based elevation. And so all of these purple houses in here, purple buildings, are buildings that were damaged in Sandy. So this was kind of what we wanted to do. And the buildings that were within the preliminary flood zone are not in the 2007 flood zone, were built and altered after 2007. Actually that should also include and were damaged in Sandy. So we whipped out our ogre a little more and kind of looked for those buildings. And New York City has a great data set called the property address file that lets you find out when every building in New York was built or altered. So we looked for those. And that led us to this family who lives in Sheepshead Bay. This is the Morgan family. And they bought their house in 2008. And they actually looked at the flood map. And this was one of the first houses we came across. We actually, I actually printed out this map. This is just a QGIS printout. I printed out this map and handed it to a reporter. And she went out and knocked on doors of these purple houses. And so she found one of the first houses she knocked, whose door she knocked on was the Morgan family. And they thought they were fine. They had looked at these 2007 maps. They even did a few precautionary measures. And their house did not do so well in Sandy. Their whole basement was flooded. They got $17,000 from FEMA and $6,000 from their homeowner's insurance. But they spent nearly $50,000 out of pocket to rebuild their home. They had to knock out their whole basement. So this is kind of one example of the casualty of these old maps. And we did a couple other little mini maps for Within the Story. These are kind of just embedded within the story to look at other neighborhoods that were in the same situation. So the yellow areas here are areas that are in these preliminary flood zones that FEMA released right after Sandy. And the blue here is the existing 2007 flood hazard zones that did not do so well. 
And all those black buildings are the buildings that were damaged during the storm. But we wanted to actually take this a step further. We were really inspired by maps like this one that the New Orleans Times-Picayune did, which actually shows you really, really well how much of a bowl New Orleans is, and how much of the city is actually below sea level and only protected by these levees. So we wanted to make a map just like this, kind of like this, for New York. So we decided to make a 3D map. And we did use WebGL for this and some custom code that we had written. And this is what the East Village in New York City would look like if those preliminary maps, which are FEMA's new perfect storm maps essentially, this is what would happen if that storm hit. And that's exactly where the flood zones would come up, where the water would come up to on those buildings. We did one for our neighborhood. This is where the ProPublica building is, Lower Manhattan. It's kind of funny. You can see the Battery Tunnel inlet right there. And I won't talk too much about this, but we wrote a lot of code, and we actually invented our own file format called Jeff files, for Jeff Larson. So maybe you can get a file format named after yourself if you write some WebGL. But there's a lot more about this in this blog post, so I encourage you all to read it. So that brings me to Story 3, which is going to start kind of on a selfish note. In 2012, my girlfriend actually moved down to New Orleans. She only stayed there about a year and a half, but I made a lot of trips down there. And when you go to New Orleans, even if you're there for just a little bit, the place kind of sinks its teeth into you. The music, the culture, the food, everything about it is just so unique, so different than the rest of America. It's just an incredibly unique place. But what might be most unique about it is the geography. And when you go there and kind of hang out there, you just can't imagine that people live here, let alone a city that used to be 600,000 people before Katrina, now about 300,000, basically dangling off in this wetland that's rapidly sinking. So I started reading a lot of books, and I recommend you guys all read these because these are fantastic books if you like environmental journalism. And especially this book, Bayou Farewell, by a guy named Mike Tidwell. And this book actually started out as kind of a travelogue. He went down, he wanted to write a little book about the Cajun coast, and when he got there, he realized that there's this huge phenomenon going on where people's land was essentially disappearing right underneath their feet. And it turned from kind of a lighthearted travelogue into almost an elegy for people that kept having to move farther up the bayou, because they could see the land where they had previously been living disappearing over time. So I'd read this, and you don't have to spend too much time in New Orleans before seeing a map that looks like this, which is, you know, if you live down there, a very, very terrifying thing. This is from the USGS. It's a map of all the land that has disappeared since 1932 and all the land that will disappear by 2060. So I apologize in advance for the font used on this map. I did not create it. But it's a very terrifying map. And I brought this map back to my boss and showed it to him, and he's like, yes, we must do a story on this. This is crazy.
No one outside Louisiana knows this is happening, but a third of a state is basically sinking into the ocean, into the Gulf. So why is this happening? And this is something that I was very curious about and started reading a lot about. And I think one of the best passages that illustrates why this is happening is from a guy named John McPhee, who wrote a piece in The New Yorker in 1987 called Atchafalaya. And I just want to read this to you, because this passage really, really spoke to me. And these emphasized words are my emphasis; I emphasized them because I thought they were so great. And he says: under nature's scenario, with many distributaries spreading the floodwaters left and right across the big deltaic plain, virtually the whole region would be covered with fresh sediments as well as water. In an average year, some 200 million tons of sediment are in transport in the river. This is where the foreland Rockies go, the western Appalachians. Southern Louisiana is a very large lump of mountain butter, eight miles thick where it rests upon the continental shelf, half that under New Orleans, and a mile and a third at Old River. It is the nature of unconsolidated sediments to compact, condense, and crustally sink, so that the whole deltaic plain, a super-Himalaya upside down, is to varying extent subsiding, as it has been for thousands of years. I recommend that you all read this entire essay. It's full of metaphors like that, and it's amazing. So here's, this is a great graphic from the New York Times from around 2006, where they kind of just illustrate what he was talking about here, and how the Mississippi River for the last 7,000 years has shifted course. It started out, I guess, farther over toward where the Atchafalaya River is, and it's moved over, and that's how this boot shape actually got created, the boot that we know as Louisiana. So leading up to, so in 1927 there was a horrific flood. I'm sure you all know about this, because it flooded almost every state up and down the Mississippi River. And leading up to this flood there was a big debate in the scientific community, in the engineering community, about how to actually keep the river locked into place. And one proponent in this debate was saying, let's just build levees higher and higher and higher, and another proponent in the debate was saying, no, we need to have spillways now and then to alleviate the pressure, because the levees will eventually overtop. So that's exactly what happened in 1927. And here's a closer view of the delta around New Orleans. And the only reason New Orleans was spared in the 1927 flood was because, about 60 miles southwest of the city, they cut the levee and flooded almost all these towns in the delta southeast of New Orleans. So after this flood, the Army Corps and a bunch of other people realized they needed to do something about this. And so that's when they actually started building the levee system that we know of today. And it incorporated both levees and spillways, because clearly after that they realized that spillways were a pretty good idea if they needed to alleviate pressure on the river. And the effect of this was that they leveed the river all the way down to the very edge of the delta, what's called the birdsfoot delta, in the very bottom right-hand corner of this picture. And all this sediment, this mountain butter that John McPhee is talking about, would come all the way down the river, and it wasn't replenishing the delta.
It would actually go all the way off the continental shelf and it would be wasted forever. And this is a great photo that illustrates that. And up in the top, this is the Bonnet-Carré spillway, which is a little, when the river runs high, the sediment actually gets filled into Lake Pontchartrain so it doesn't come and flood New Orleans. But all of the sediment here is either going and do one of these spillways or it's going all the way down off the continental shelf where it's being lost forever. And all of these areas, these wetlands which need sediment to be replenished are not getting it. And this is one of the reasons, the biggest reason that the coast is vanishing, the coast is sinking. So if that was all that they had done, we would not be in the situation we are in today. We would still probably be losing some land but not at the rate that we're seeing it. Because the other thing that happened in the early part of the century was that they discovered oil in South Louisiana. And these are three of the main oil pipelines because they needed to build these pipelines so that people in the Northeast could turn on their gas ranges in their kitchens and get natural gas to do their cooking. And that gas came all the way down from the Gulf. And this doesn't make it look too bad because this is just three big pipelines. But actually there's thousands of these, thousands of pipelines all throughout the coast. And the pipelines are not the only problem because they actually had to, in order to lay the pipelines, they had to get to the areas they wanted to lay the pipelines. And in order to do that, they had to dig canals. And up until fairly recently, the way they dug these canals was not in a way that was friendly to the wetlands in the sense that these canals were way, way larger than the pipelines that they needed to drill. You could see in the bottom image here in the pipelines they needed to lay. And what they did was they dug the canals and the spoil banks, which is what all the stuff that was inside the canals got put over to the edges. And they did not fill those in afterwards. And so what happened was the spoil banks would weigh down the wetlands and cause everything around it to sink and saltwater would intrude and cause the vegetation to die. Later on, they got a little smarter about it. And this is one of the later pipelines that they laid, I think either in the 70s or 80s. And in these, you could see the ditches they dug for the pipelines were only as wide as the pipelines themselves. And they covered up the spoil banks back onto where the original cut was dug. And you can see the effect of this. So these are all these former canals that got wider and wider over time. And saltwater infusion came in. And these are the spoil banks we're looking at that are just kind of perched above the water. And what were canals turned into lakes turned into open water. And this is not just an isolated thing. These canals, there's, I think, something like 10,000 miles of these canals throughout the coast. This is one Louisiana State University dissertation that looks at the canals just in this one section between the Mississippi River and Bayou LeFouche. But this density of canals is actually pretty consistent throughout the entire coast. This was just this one student's area of interest. 
And what's important, besides obviously the wildlife and all the estuaries and the fish that live in the wetlands, if you live in the city of New Orleans or any of the metropolitan areas around there, these wetlands are the first line of defense for you when a hurricane comes. So when a hurricane comes over the ocean, it slows down when it hits these wetlands. So it doesn't put as much pressure on the levees. If these wetlands weren't here, these levees would essentially become seawalls. And that wouldn't work too well because the levees are made out of mud. And seawalls, like the kind you have in the Netherlands, have to be made out of concrete, which would cost billions and billions and billions of dollars. So these wetlands are super, super important. They didn't realize that in the early part of the century when landowners would sell the wetlands to oil companies thinking they were just where alligators and bugs lived and stuff. But they know that now. And one levee district in New Orleans is actually suing 97 oil and gas companies for the coastal degradation that they've done to the wetlands over 50 years. This is just another view that the Times-Pikyoon did of the other side of the river and the density of the canals over there. So this has been kind of winding its way throughout the courts. The governor has done many things to actually try to kill this lawsuit, and they're trying to ask a federal judge to step in. It's kind of every day there's a new development here. Individual parishes, which are what counties are called in Louisiana, are also getting into the act and suing these gas companies because of this coastal damage. And you can see this, if you look at USGS aerials over time, if you look at this area, this is one of my favorite areas to look at, which we call the wagon wheel, which is a salt dome near the birdsfoot delta. And they discovered this salt dome and started dredging canals all the way around it and putting wells in. So over time, you can see how this activity leads to the land disappearing and sinking. There's this USGS image from the 50s and the 70s and 2013. If you look at this map, in 2013, NOAA, National Oceanic and Atmospheric Administration, delisted 31 places from USGS maps that used to be there that don't exist anymore because now they're open water. Places like Bay Palm Door and all these bays that used to be there, bays and bayous and channels and ponds that are not there anymore and are now taken off maps. So we decided to do a project around this, and we thought that the best way to do this was to use satellite data. We didn't know a whole lot about satellite data. You have to remember that we're actually in the news industry and we're kind of behind on the technological stuff. But we tried to get up to speed pretty quickly. And we thought that Landsat would be the best tool because of all the different bands and because of the resolution, the trade-offs between the different, the spectral resolution, and temporal resolution was a pretty good trade-off here. And so we decided to play around and see if we could make a pretty good map to show what the delta looked like in 2014 so we could lay on these older maps on top of them. And so this is our first attempt. We used just kind of a straight, true-color RGB image here. And this is kind of a good first attempt, but we were not too happy with it because you couldn't really tell the difference between healthy land, dying land, and sediment, and river. 
It all kind of met in this kind of goopy, gunky rat's nest thing in the corner. This is kind of what you see in Google Earth and why we didn't want to go with Google Earth. We thought we could do a little better. So the next thing we did, by the way, we also pan-sharpened this, and so the next thing we tried was we tried going with a couple different infrared bands and then a green band. And we liked this a lot better because you could see the difference between the land and water a lot better here, but we weren't too happy with kind of the uncanny valley effects that it, when laypeople and readers would look at this map, they wouldn't, we thought they wouldn't actually get a sense of the coast. They would think that this looked a little too alien. So this was our second attempt. We also pan-sharpened this one, and people told us that pan-sharpening false color images was not too good of an idea, and I think I kind of agree after looking at this. So we kind of settled. We went back to the true color approach and decided to put a mask, an infrared mask on it, and this side became kind of like our God combination here. We thought this looked really, really good. You could see the difference between the healthy land, the dying land, the wetlands, the water, the river, the metropolitan areas. People would see this and say, okay, this is what I think Louisiana looks like. And big props to Brian Jacobs, who's in the crowd somewhere. He did a lot of the legwork on this. This is kind of his baby. So it's a really, really amazing map. I just love this map. But we're standing on the shoulders of giants here. We took a lot from people like Charlie Lloyd and Rob Simon, who helped us understand how to play around with lands at that data. We also hung around the Tau Center at Columbia for a couple of days, and they showed us some stuff, and that was really great. So in order to create this visualization, we needed to go back in time and see and find a map of what the state used to look like before the leveeing, before the drilling and dredging and all of that. And so we got in touch with someone at LSU, a map librarian at Louisiana State University, who gave us one of the earliest USGS maps of Louisiana. And we were super excited about this. And here's what that map looks like after we geo-rectified it, geo-referenced it into the same projection that our newer map, our 2014 Landsat map, was put in. And we overlaid it, and we thought this is just a great contrast. People are going to love this. And we created these iso-lines to let you see what the coastline used to look like compared to the coastline now based on this 1922 map. And if you look at, say, Barataria Bay, which is that kind of bay in the middle there, just the size of that bay then versus now is kind of staggering. So that was great for the kind of before and after. But we wanted to actually create the land change between 1922 and 2014. We wanted to essentially create land. And we wanted an effect that looked like this kind of. And in order to do that, we used this excellent USGS study done by a guy named John Bara and a few other people, which is just fantastic. He kind of calls this the Mardi Gras map because it's like Mardi Gras colors. And each one of these colors is a different range of land gain and land loss. So certain times land was gained and certain times land was lost. And we used this to create these layers. And we didn't want to show this all on one map. 
We thought that readers wouldn't understand it when they saw all the gain and loss at the same time. They'd intuitively just see a big mass of land. So we found out what all those colors were. Kind of put them in a big array there and wrote our little, this is our new little task here, which was taking all the layers and for each time period creating an image that combines the land loss from that period, from the current period. So say you're in 1932 to the last period and land gain from the current period to the first period to the current period in order to create a little snapshot of what the land looked like at any given point in time. And so that's exactly what we did. And we also wrote a little code to make some graphs just by counting transparent pixels, transparent pixels there, to show you how the land changed over time. And so from then we decided to, you know, we knew we wanted to put this all into an app, a news application, as I was saying before. And we were inspired in doing that by a couple of different sources. One was this amazing map of, national geographic map of the Gulf of Mexico, which shows just all of the oil leases and oil infrastructure and pipelines and wells and it's just a wealth of information. If you could find this map, I recommend you try to track it down, look at it, it's pretty big here, but you know, look at it in paper because it's awesome. And there's other little app called Hikeable that we really liked, which is just an app for looking at various hikes you could take around the world. And we liked this app a lot because it gave you contextual information about the hikes and it told you points of interest and it split it up into days and you can kind of go through and see what you might see on each day of the hike and we kind of stole a lot of ideas from it. So if anyone out there worked on this, then I owe you a beer. So this is kind of what our final product looked like and it started out, we started out with this 1922 map and we let users kind of fade between all the layers I was talking about before so you can go from there to 2014 and then look at all the layers like the levee system, the canals and this is just the canals in that one area of interest, all of the oil and gas infrastructure so these are all the oil wells around the coast and all of the pipelines around the coast. And then for the kind of the scariest part of the app, we wanted to show people what would happen if we didn't take any action. What would happen if we didn't do any coastal restoration and how the coastline would recede even farther. But the most poignant part of the app I felt was when you got in a little closer and you looked at and you zoom into one of these areas of interest and we have a bunch of these areas where you can read people's stories and listen to them speak and see people's livelihoods that depend on this land and how they've watched the land disappear over time. And you can slide through this timeline and see the land change over time and you can see these orange points of interest here which are those genus points that don't exist in NOAA maps anymore. And we thought this was really, really powerful and people seemed to love it. And just to conclude today, I kind of want to play an audio clip from one of these because I think that this guy, Ryan Lambert, who lives in Burris, Louisiana, I think his words about land loss here I think are a lot better than mine and he's a lot more articulate than me too. 
He's a fisherman down in coastal Louisiana and I just want to play something from him. Today we're at what's called the High Chaperrill Camp, it's sits on the, what used to be the Banks of Dry Cypress Canal. It was just a camp for weekenders to come and the people that owned it, we come out here and spend the weekends like everyone in Louisiana has camps throughout the history. People came to the camps for the weekend and it's been going for quite some time now as all the land has disappeared. You know, it's a different day. We've lost our culture actually, you know, the camps like the one right behind us. They were throughout this whole march. They had communities that people lived, people were born in those camps. They would go and run their traps out the back door and the front door, they would shrimp and crab and they'd take their pier over across the bay and have fish fries and stuff and just yell across the bay to the neighbors. You know, back in the early 80s or late 70s, you'd have to go and navigate your way to the Gulf of Mexico through bayous and bays and you know, just go all the way 6.3 miles through marsh with all the different fur-banging animals, raccoons and some neutrals back then but muskrat and otters, ducks as you pass, ducks would be getting up everywhere and now it's completely gone. It's totally open water now. It makes me physically ill. You know, it used to make me mad. Now it makes me sick. I have witnessed what people can only dream about in other states and for that to be gone and lose them more every day physically makes me ill. It's all relative to where you are at the time. You know, if you're a young person, you think this is what it's supposed to look like. All water intrusion is such a slow, intimidating cancer that people get complacent and they think, well, I remember back in the day but when you're old enough to know, it's too late. So I think he kind of put this project in better words than I could and that's kind of, you know, we want people to get a kind of visceral reaction when we create a news application and we make maps. And you know, when that happens, I think that it kind of shows us why we're all doing this and why we're like to make maps. I think this kind of says it all here. And so that kind of brings me to the end and just wanted to say that this is what we do. We like to kind of bring a sense of geographic accountability. And thank you so much for having me. Thanks to Fossil4G. This was fantastic. The URL right here is these slides if you want to read anything more. Thanks. Take a couple questions. I think everyone wants beer. I have just one quick question. Was the, were the oil lines that you displayed on your maps, how did you get that data? Is it publicly available? We got a lot of it from the state of Louisiana, a couple other sources. A lot of the oil pipelines are not available for national security purposes, that's what they tell us. But we tried to find as many as we could through different agencies. A lot of them are kind of from the 80s. They're not all up to date. So it's actually worse than you see on the maps. Sorry, I don't have a question. I just wanted to make a comment from the grass video exit. Redcock Cave Woodpecker is still endangered. Thank you. Wikipedia lied to me. It's been known to happen. Thank you so much, Al, for coming. That was fantastic. It was everything I hoped for.
|
Closing Keynote Speech, FOSS4G 2014, Portland, Oregon.
|
10.5446/31758 (DOI)
|
So, I was curious, how many people have heard of Cesium before this talk? Wow, awesome. How many people are using it or evaluating it right now considering using it? Okay, cool. Hopefully we'll see more hands at the end of this. So, my name is Patrick Hozi. I'm a developer from a company called Analytical Graphics. And for this talk, you know, I want to talk about Cesium, maybe like a pretty quick intro, but more about what's new since Fals4GNA 2013. So, Cesium is an open source JavaScript library for doing 3D globes in the web browser. It's built on WebGL, so it's very fast. A lot of developers have a long history of doing 3D graphics in OpenGL, so we've done a lot of optimized graphics algorithms. It's being used in a lot of different fields. It's fields that we didn't even anticipate it being used. Certainly, AGI's roots are in aerospace, so we have a lot of people using in aerospace field, but also geospatial is widely used, as well as some other ones that maybe we didn't expect, sporting, entertainment, even some real estate folks have popped up, people doing air traffic visualization, which is a great list of different problem domains. So, in terms of what's new, you know, I want to talk about what's new in the API, what are the new features. But the other thing that's new that's cool is there's a lot of applications that have came out built on Cesium, so I wanted to show some of those. I'm going to try to do all these live demos. I know the internet's been going internet out. I do have some videos as backup if we need. If everyone wants to start streaming video right now, that would be really, really useful for these demos. So, the first one is probably maybe the most well-known one, which is CEN. When people hear CEN, Cesium, they go, well, tell me about CEN. And we heard that a NORAD track CEN uses Cesium. So, I'll bring up this. This is the only, this one is a video because it is, it only runs on Christmas Eve. All right. So, NORAD has this big, big elaborate system for tracking Santa Claus on Christmas Eve. They have Sonar, they have satellites and geosynchronous orbit, they have fighter jets, they have Santa cans, and they use all of this technology to see where Santa is and to give us Santa's position so that on a web map we can show where Santa is with an animated 3D model here. So, this app is built on top of Cesium, AGI built this. We've done it a few years in a row now, but new this past year was the 3D model we see here, the animated model. And this is using a format called GLTF, the GL transmission format, which is something I work on with Kronos, which is, which will be an open standard for streaming 3D models to the web browsers. So, think of a format like Kalada, that's XML based, that's for interchange between 3D formats, 3D tools, I'll replay that, whereas GLTF is for getting it to the web for runtime visualization. In terms of WebGL acceptance rates on people who ask, well, where does Cesium run, how many machines can it run on? So, this went to 20 million users in December and we had about a 50% WebGL success rate. So, we reached 10 million users. And these are all sorts of users, this isn't just tech community, right? This is everyone, the general population. So, I think 50% is pretty good and 50% in rising too. So, we'll see what the stats are this December. Okay. So, next up is something, AGI is building on top of Cesium and this is for space situational awareness. Let's see if the link works. 
So, AGI does a lot of work in space and we're tracking satellites and we want to know where satellites are, where they will be and hopefully not if they're going to hit each other, potentially come near each other. So, this here is called the space book. Let me try to resize it a little bit. Oh, it's very big. Okay. And I'll zoom out and these are all active, unclassified satellites and this is a real data coming from AGI servers. I'll speed it up a little bit. And we can see them moving. We can zoom in and take different angles on them. And then I'll zoom all the way back out. And we can do things like we can take the stats and say, well, show me only operational ones. I know. There's a lot of debris out there too. It's wild. And then this is still our full 3D globe but it also does 2D and that's the same API. It just does, it just switches it to 2D and you can spin around and all that kind of stuff. We can take it to 2 and a half D. We call this Columbus view. This is particularly exciting when you have satellites because you see the Geo belt very clearly out here and I'm not a space guy but I'll pretend a little bit to be a space guy. So, and let's switch it back to 3D. And then this is just some image, this is natural Earth imagery that we tiled up using TMS and I'll switch it to Bing imagery. Oh, coming in pretty fast. And I'm going to turn on some terrain data. I know some folks are going to Mount St. Helens for the fuel trip tomorrow. So let's go do a preview. And here, another thing that's new with Cesium is the streaming terrain. We've now published the format. Last year we were streaming height map data. Here we were streaming mesh data, it's quantized mesh data that's really tuned for rendering. So it's very concise. We get in a web worker and we very quickly create the final mesh that we're going to stream in. And this is global terrain. So another place I really like to go is Seneca Rocks. And this is the place I used to rock climb when I was younger and this was also a training ground for World War II. I'll zoom straight down and this is amazing. So this fin, ooh, train is not coming up here yet. Well, this fin actually peaks like straight up. It looks, there we go. How many people are streaming video right now? So here, this peak is really amazing to climb. People climb to the top of it. It's like 600 feet tall and you can look out all around it. And I'll come back out home. So that's our SSA app. Okay. So let's see. Okay. So this D3 demo is something that we did last year too. This was actually a hackathon project. And we started this in one day, two of our developers, because we were like, well, hey, we really like D3. I wonder what type of apps we could do with CZM and D3 together. And the internet's looking like it's slowing down on us. Okay. So here we have the classic health and wealth of nations. That's data that you see with a lot of D3 visualizations. And you see the D3 visualization where on the X axis, you see the income. And on the Y axis, you see the life expectancy. And then over time, you're seeing how these vary. But what we're also seeing here is the Columbus View version of it in CZM. And these two are linked too. So I could click on one, I could mouse over one of these and it gets highlighted, or I could mouse over in D3 and it gets highlighted out here in CZM. And then you could also switch this to 3D. So it's still the same API. Cool. So I want to show two more apps. And then I'll go into some of the features. 
So NICTA is a company that's been doing a lot of work with CZM. They've been doing some very cool apps on top of it. And they've also been contributing a lot to the code base. So one app that they're building is the National Map. This is the Australia National Map. So Australia has fantastic open data within their government. And this map here is built between NICTA working with the Department of Communications and Geosciences Australia. So here they have a lot of different raster data. I'll turn some of them. And then a wider way of vector data. So I'll come down, I've been playing with the population estimates. I'll turn that on and that's an overlaid image layer. Then I could click on one of these features and get some metadata about it coming up here on the right. And then maybe I'll go over the broadband set and turn that on. Then we're getting full 3D. And I could go for info for some of this and I could maybe change the translucency. Let's make that really translucent. And this is in beta right now, but you can check it out. Everything that I'm showing here, this is all online. So you could go run these yourself. And the final demo I will show is also from NICTA, which I think is very exciting stuff here. So this is an app called Duorama that they built on top of Cesium. And it really features a Cesium streaming train and imagery in a sporting application. Over here I'll speed it up a little bit. See what these guys are going. Then I'll slow it down. So here we see gliders. We have several gliders going. They all had GPS on when they did their trip. So they have position, time, information. And then we're playing it back inside Cesium. And this has the streaming terrain data, which has probably 30 meter in this part, as well as the streaming imagery, which is Bing imagery here. And then there's some post-processing going on to get this kind of old school film effect. Then I could switch between the individual people paragliding. And look at each of them. And then each one, you get this height profile, this terrain profile. And it's really interesting to see what they change altitude. They fly out to change altitude. Then you'll watch them fly and go along the ridge, the ridge lines, the mountains. Very cool stuff. And what's awesome is, I mean, this is JavaScript and WebGL all on your browser. So this is dooramba.com. You have some GPS tracks for things you've done. You know, go upload it to here. It's pretty nice. Cool. Okay, I'm glad the internet held up pretty well. All right. So a lot of people ask me, they go, well, why did AGI make Cesium? You know, and it's a very fair question. A lot of times in a technical talk, I don't even address it at all. But it's such a common question. There's two reasons we made it. One was we had customers that needed cross-platform visualization. And we had a bunch of C++ and OpenGL code. And we thought about porting it. But at the time, WebGL was coming out. And we said, well, hey, if we can get on WebGL, we can run with no install. Don't need admin rights. Don't need a plug-in. We could go cross-platform, cross-device. And that would be awesome. So that was one reason why we did it. The other reason is we needed it. We needed it for SSI work, the space app that I just showed. And then the next question that comes up is people ask, well, how do you fund Cesium development? You know, if you look at the GitHub, there's quite a few people that are working on it. We've been working on it for quite a while. Where is the funding coming from? 
So AGI does sell commercial terrain servers, provide data for Cesium. And we're also soon going to offer support, enhance our space visualization and integration with some of our products. So we are continuing developing this. This isn't, we didn't just make Cesium and then throw it over the fence and said we're done with it. We're really active and it's growing actually quite a bit since our 1.0 release. But it's not just AGI working on Cesium, which is really awesome. We have quite a nice community growing. It's been on GitHub for over two years now. Then it's been in development for about three and a half. We started in February 2011, right, just about right when the WebGL spec was announced. There's 18 contributors from AGI and 17 from other companies where NICTA would be the most prominent contributor doing a lot of terrain and imagery related work. Also got some nice contributions from Raytheon on triangulation. And then some individuals who submitted individual CLAs are from major companies as well. Now, it looks like we have 35 developers. We do not have 35 developers full time. One day, I think we might, but today we don't. We probably have more like, you know, maybe three to seven full time on it, which is still a nice size team for it. And then the forum, we have a pretty active forum that has maybe 10 to 25 posts a day. It has almost 400 people. So it's a great community. So with WebGL, it runs almost everywhere now. So big news that I'm sure many of you have heard by now is IE11 does support WebGL. And the WebGL support is actually pretty good. CZM will run on it. And for a GPU bound application where a lot of the work is being done on the video card, IE11 performance is good. For CPU bound, I need to do some more testing before I'm going to, I'll probably do a blog post to say where I think performance is on that side. Opera is actually really good too. So Opera at this point is, runs on Blink, which is the fork of WebKit. So Chrome and Opera really are running the same WebGL engine. So if you can't run Chrome, Opera is a good alternative to it. iOS and Safari, you know, that's one of two ways. So I think Apple did make an announcement for iOS WebGL support coming out very soon. There was some informal testing done on the NICDA national map application and CZM does run, but it does need some performance improvements. I've heard that across the board from a number of WebGL developers. So we'll definitely get that. Hopefully the combination of us and Apple will get that quality up to snuff. We don't technically support Android yet, but we pretty much always run on Android. Certainly if you take out your Nexus 7, it runs really well. But we want to get 100% tests running and passing before we say that we definitely support something. So the WebGL has just been getting better and better over the past couple of years. So great platform for reaching a lot of people now. So in terms of what CZM does, the core features for geospatial users is map layers, terrain, vector data, and 3D models. Then you can do the 3D, 2D, and Columbus views. We do all the standard map tiles from WMS, WMTS, TMS, OpenStreetMap, and so on. Then for the terrain, I'll talk about it in a couple of slides, but we do have a quantized mesh format that we use. But we can also still do height maps. The problem with streaming height map terrain data is it takes more processing to get that to become a mesh that we're going to render with WebGL. And also for flat areas, you know, you're wasting data there. 
For vector data, we have a JSON schema that we're working on that Brian mentioned in his talk in the first session for this track, which is CZML. And that is for time dynamic visualization. So that's for not just describing spatially where the data is, but how it varies over time. So instead of just saying, here's a position, you can say, well, here's a position at this time, and here's a position at this other time, and here's how you can interpolate those. Or here's a color at this time interval, and then here's a color at this other time interval. So it's very flexible, and certainly we use it a lot for our satellite visualization, which has fairly deep requirements. And then, of course, we do GeoJSON and TopoJSON on KMLs is on the way. And then for 3D models, I mentioned GLTF, and I'll do a demo of model conversion for that. Okay, so one thing I like for CZML is, you know, CZM has an API, and we have reference documentation and examples, and you can write a lot of JavaScript code and go right down to our API. But I think a lot of use cases, you can write server side code or offline code that generates a CZML file, which is a JSON file, and then you can see, and that is your application. It's really server side generating your client. It can be very simple. So I'll try to bring up another example. If you just go right to the main CZML website, you can do this right now, and there's this click to interact button. And let's see how fast it comes up. So this here, these are a few satellites that are moving around, and we generated this using one of our desktop products called SDK. I'm going to switch it to a brighter map. And this application is really just processing the CZML file. So you have all the satellites moving, and you don't have any really client side code because that's all in CZML itself. One cool thing to show while we're running this app is I'm going to zoom out home. I'll switch to Columbus view. So this satellite is a Minaya orbit, where it sometimes comes really close to the Earth, and then it gets really far away. So I'm going to click on the Minaya satellite here. I'll get the info. And that info box is also that is part of the metadata in CZML. So I'm going to speed up the animation. I'll do it just for a second. I don't want anyone to throw up. But you can zoom all the way down, follow it all the way up. It's pretty cool. So yeah, so CZML, check it out. It's a similar relationship. CZML to CZM is similar as KML to Google Earth. Okay, so what's new feature-wise? Yeah, an awful lot. So we'll go through each one of these in excruciating detail. Okay, we'll just hit a few of these. So we did a 1.0 release August 1st. And we were in beta, you know, beta in quotes for quite a while, but we hit 1.0, which really means the API is stable, and now we won't break it release to release. We may deprecate and then break over time, but we won't break it right away. And the 1.0 was really a great thing for us. We saw a huge uptick in adoption and contribution like the very second we hit 1.0. And there's been a lot of big releases recently, too, right? Open layers three, and then we're going to upcoming, we're going to see a leaflet 1.0, too. So we did the 3D models, the terrain format. We've improved translucency with order independent translucency. So you have intersecting or overlapping translucency. It looks better. You may have some trouble on some older Intel cards, but in CZM 1.2 that's coming out August 1st, you can turn that off if you need to. 
We started a plugin ecosystem, so I'll bring that up. So CZM got features and plugins, and folks are writing code that works with CZM but isn't in the core CZM repo, and they're doing some cool stuff. So Oculus Rift support, subsurface visualization by Necta, Leap Motion Controller. So we're starting to see more and more of these. So please, this is also an easier way to contribute to CZM if you don't want to go into the core code. You can still keep it, and we'll list it here for you. We can do an unlimited number of overlapping map layers now, too, before we were limited by the hardware limits in WebGL, and now we'll do a multi-pass, and you can do a lot of them. Okay, so I want to talk a little bit more for 3D models and terrain, a little bit more on the tech side. So GLTF is the GL transmission format, and it is tuned and optimized for being fast to download and fast to render in WebGL. And the way we do this is we have a JSON file that describes the node hierarchy of the scene and the materials, just kind of that metadata of the model. It's very easy to parse. And then all of the geometry, the textures, the animation data, the keyframes for the animation, the skins, that's all in binary data. And that could be external files, or it could be embedded, base64 encoded, embedded into a single GLTF. And anything that you might see in a traditional model where you might have to triangulate the triangles, you might have to split meshes into less than 64K vertices, all these types of preprocessing things, that's all done offline. And the GLTF defines a spec that's very easy and quick to render with WebGL. So what we do is we provide a converter from Klota, which is a popular interchange format, to convert your Klota file to GLTF. And I'll show a quick example. So this is on cesiumjass.org slash convert model HTML. So I'm going to take, I have a, DIE is a Klota file, and it references two textures, and I'm going to drag and drop them here. And they're getting zipped, and then they're going to upload it to the server, and they're going through the content pipe on to do all the transformations we need to create a GLTF model, which then we're going to download, and we could load with the cesium API, and then we'll see a preview here. And these models aren't too big, so hopefully they will come back soon. And we will let that process, it will finish, I promise you. And we will move on because we only have three minutes. Okay, so a little more on the train data. So we're using meshes, so we have a pyramid of levels of detail, right? So think of it very similar to how you would slice up imagery data. And each, instead of having an image, each thing is a mesh, and that mesh is quantized. So we're using fewer bits, and it has some extra data, like bounding volume so we can do efficient calling and level of detail, as well as some edge information so we can make adjacent nodes look good together. So this is something that we've really tuned to make it really fast to render. We downloaded a little bit of processing in a web worker, get it off to the pipeline. Let's see. All right. Well, that's a bust, that demo. Not bad, although other demos went well. So just some statistics. This is 80,000 lines of JavaScript code. It's quite a bit of JavaScript code. We do use AMD, so you could use required JS to just pull in the parts that you need. And we have 76,000 lines of test code. The numbers aren't that important, but the ratio to me is very important. 
So we have almost as much test code as we do regular code, well over 5,000 tests with 93% code coverage. And what's awesome here is I gave a talk last July, and these stats were basically the same, and that we had the same amount of test code as we did regular code, and the same amount of code coverage. Because usually over time, you know, you could write like a thousand line app and you have 100% code coverage. And then over time it deteriorates, but we have not, it has not deteriorated, so we're always running the tests just to keep the quality high. Okay. So to quickly finish up, I want to talk a little bit about CSM and the false 4G ecosystem. You know, we have a lot of great mapping APIs here, and I want to talk about how CSM has been cooperating with some of them, and two in particular, OpenLayers 3 and WebGL Earth. Okay, so OpenLayers 3, great product. Congratulations on the release. And there's been a lot of interest over the past two or two and a half years really on integration of OpenLayers 3 and CSM, and you know, is that something that we're looking at? When will that be? And there's been some great progress there by camp to camp. And this is something that AGI is not working on. AGI is a big fan and big supporter of it. But this is something that Camp to Camp has been doing. And here we see on the left the OpenLayers map, and on the right we see a CSM window, and you see there's vector data on here. And you just write an adapter and you attach it to an OpenLayers map, and then when you start adding data to the OpenLayers map, you also see it in the 3D window. And when the OpenLayers 3 mailing list, there's an initial version of this, it should be out pretty soon. The other thing, WebGL Earth 2, so many of you may have seen WebGL Earth, then that has its own WebGL engine. They rewrote WebGL Earth 2 and it's now built on CSIUM and it exposes a leaflet compatible API. I think you just need to rename the namespace and it's supposed to work with your existing leaflet apps. I have not tried it myself other than downloading the initial example, but it looks very promising and it's a nice, maybe very easy way to get started if you already have a leaflet app. It was very cool to see kind of the collaboration with, especially with CSIUM, kind of providing the underneath 3D for so many different apps. So maybe outside of the open source world, you know, we're seeing a lot of things happen with the Google Earth plugin. 64-bit Chrome has actually dropped support for the plugin and then Google has announced that they're going to drop NPAPI, which is the plugin technology from Chrome by the end of the year and it looks like they are going to go through with that. So we're seeing a lot of folks like Cube Cities here, they have a very cool screenshot that are now looking at CSIUM. They were Google Earth plugin developers and they're using KML and now they want to go to CSIUM because, you know, I don't, I think WebGL is going to be around for quite a while. So as Cube Cities, they do a lot of real estate visualization like showing me the hotels and highlight them by how much they cost for a given night and here's some early work they're doing with buildings and CSIUM. I'll finish up just with the roadmap. So on the near term, you know, 3D is awesome, our tech is really cool, but we need to be able to get your content into it for it to be useful. 
So we're improving CSIUM on the documentation side and doing a little more outreach for it and then we're looking to hit KML hard and do a really good implementation of KML, including all of the graphics improvements that we need to do underneath it. So you'll get the KML import and then also more CSIUM API to get better, you know, get better graphics. Then we're doing a little more community outreach than we've been doing in the past, so I'm really excited about doing that. And please jump on the forum as to those particular features that you're interested in, you know, there's some good conversations that are don't be shy. Please tell us what you want. So that's all. Thank you again so much for staying for the last talk. Who has the first question? Thanks. That's great. Just two questions for the rock climbing example. What kind of data did that use in the background, like what was it, DM and what was the satellite layer? Sure. So for the Seneca rocks, that terrain data there is NED data, which I believe is three to 30 meter in the US. And then the imagery there was Bing Maps. Okay. And then the second question was, I have actually, I'm working with somebody who had used Google Earth and obviously cannot use that in the future. So he was just, what would be the best format to get, for example, data out of post-JS? We have been using KML and Google Earth's plug-in. So would that KML be okay on CZML or would you rather do some other different format? So we're working on KML now, and, you know, we'll be working on it for the next several months. So one thing you could do is go on the forum and tell us what parts of KML that you're using, and we can see how that aligns with what we're looking to do. In fact, we might are, if you're using a small enough subset, we might already be supporting it. But if not, there's also GeoJSON and CZML. Thanks. Hi. I was wondering what tools are available for generating CZML? Yeah. So we have an open source project called, oh, and hey, just for the record, guys, here's the converted model that I just refreshed. We have an open source project called CZML Writer, and it's on GitHub. And this is a C-Sharp and Java library for writing CZML. There are some community projects as well. There's a Python library. And in here, I believe, I'm not sure if there's any converters anymore, but there's definitely just the writers to make it easy. So at least Java and C-Sharp for sure. And I think there's a Python one out there. What kind of options would you have for putting in something like X3D or CIDI-GML? Yeah. So we're really interested in doing 3D building. So I've been looking a little bit at CIDI-GML. Most likely what we'll do with CIDI-GML is something similar to what we do with the terrain data, where we will convert it to a tiled format that has binary nodes for rendering massive buildings. We'll also consider maybe doing CIDI-GML directly, which would be fine for small cities. But to me, CIDI-GML is very similar to Klota, where it's good interchange, but not necessarily good for rendering. The difference here, though, is if you're going to convert CIDI-GML to a tiled format for rendering, you have to preserve the semantics, right, because people want to know that that's a door and that's a window. So, but if you have building data, you're interested in that, please email me, because it's something that we're looking at and we're really interested in moving on. 
Can you describe a little bit about converting from the net data into the open grid or the open mesh, sorry? Yeah. So there's a, whew, there's so many different source terrain data formats. You know, so we use Google to read it, and then we have a content pipeline that runs on the server to generate those tiles. And that is one of the value add things that AGI sells. However, we don't want to lock anyone in, so we publish the format. So if you don't want to use AGI server, you know, you're welcome to use it to write the format directly. I feel like they'll already support stuff to format that other apps don't perform it. So we call it quantize mesh, and that is the format that CZM uses for rendering terrain. But you could also do tiled image height maps, but it's not as performant. I really don't recommend it, but if you already have that data available, I mean, you can do it. More questions? If you have terrain data and point data, can you project that point onto the elevation map? So you have, like a point, are you talking about rendering point clouds, or you have a point and you want to know what the altitude is of that point? You have your 3D mesh, and you have a point on that mountain side, and you don't know the elevation from your point data. Can you project it onto the map? Yeah, so we do have functions to get altitude. If you have a lot longer, you can get the altitude, and we have an example online where over mount Everest, we sample, we put a bunch of icons or place marks, and then we bring them all up to altitude. So, yeah. Other questions? This is clearly not, I don't know, probably not what a lot of people want to do with it, but I mean, the really rich 3D capabilities, but also an amazing number of other rich capabilities that would also be useful in a flat 2D space, you know, like the animation change over time animation. Is it easy to pop from the oblique view? Can you pop from the oblique view to just the standard old, you know, looking straight down on the earth view? That's probably quite easy to do, right? Yes. But that said, do you see a niche in just doing kind of making advanced visualizations possible easier in a traditional GIS view? So for 2D, I mean, we continue to support 2D as we do 3D. I can't say that right now, maybe in the next six months for AGI's roadmap that we're going to be aggressively looking to go beyond with 2D, but certainly it's a possibility. I mean, we are working closely with OpenLayers 3 guys, so we kind of see a nice collaboration there. So, you know, I can't say that we're going to go beyond, and certainly I'm not necessarily looking out beyond six months yet, but I can't say it's there in the next six months. Other, other questions? Yeah, hello. I tested a C-SUM in the firewall in a private network, and I had several issues connected to the datasets. I can choose from the side. Is there any special communication board? It should be open or something like that? So the question is that you're on a private network and you're having issues getting to the datasets. So we do provide some tile datasets that you could just bring onto that, onto the private network. Under the cesium plugins, there's a cesium assets repo, and we have, for example, the natural earth imagery tiled. And if you put that on the private network, I think you should be in good shape. The only thing that you'd need to consider is cores, cross origin resource sharing. 
So if you have one server with your imagery and another server getting your web app, you need to make sure that the server serving the imagery allows cross origin resource sharing. Okay. I have to correct myself. So the communication with the demo side, with the standard demo datasets, maybe can interrupt it, something like that. Is there a special channel or port that must be open for the firewall to connect these datasets? I don't think so. See me afterwards and maybe we could talk more about it, but I'm pretty sure that there doesn't need to be any extra firewall stuff. Any other questions? I think we have one more up here. The D3 demo, does that mean that D3 is built in or is it a separate library that you just built? Yeah. So the D3 app is an application that includes both Cesium and both D3. So that's not one. I mean, actually, it could be pretty cool for someone that write an API that maybe does some nice integration, but the integration there is done at the application level. We'll go ahead and add another question over here. Does it play well with other HTML elements on the page? Yeah. So the question is, does it play well with other HTML5 elements? And the answer is, yeah, it plays really well with them, which is one reason why WebGL is so great. So here's an application one of our users made where they did a lot of UI elements and these aren't, we didn't do this UI. A lot of this is their UI. So with WebGL, you know, this is just another div on the page and you can alpha blend them. You can put UI on top of it. You can even alpha blend the Cesium canvas and put things behind it. Yeah, it works really well. More questions? All right. Well, thanks everyone for staying after. Thank you.
|
When building 3D mapping apps, we no longer have to deal with closed feature-sets, limited programming models, temporal data challenges and bulky deployments. This talk introduces Cesium, a WebGL-based JavaScript library designed for easy development of lightweight web mapping apps. With live demos, we will show Cesium's major geospatial features including high-resolution global-scale terrain, map layers and vector data; support for open standards such as WMS, TMS and GeoJSON; smooth 3D camera control; and the use of time as a first-class citizen. We will show how Cesium easily deploys to a web browser without a plugin and on Android mobile devices.Since last year's talk at FOSS4G NA, Cesium has added 3D models using the open-standard glTF, a large geometry library and higher-resolution terrain.
|
10.5446/31759 (DOI)
|
All right, I think we're there. Hi everyone, my name is Yara Pellusen. I come from the Norwegian mapping authorities where I worked the last six years. My background is mainly within post-GIS work, backend stuff, some configuring of services, stuff like that. So me venturing into the world of 3JS and JavaScript in general has been a, let's call it a learning experience. So what I'm going to present today is the state of this projects mine, WXS.3JS, which is basically trying to visualize, make a 3D map based on services, WCS services, and initially WMS services, eventually also cached WMTS services. And the end product will be, is not, will be a sleepy map, the one we know from open layers, etc. So now that they closed the door, we could probably just start, I guess. All right, so no good presentation without an overview. We start off with just like the history background of the project, how it came to be. Then I will just go into the components that are part of the project right now, give a brief overview of those. Then since this is a visual project, I'm just going to go straight into the demo. I will use some time on GitHub, just looking to the code, showing everywhere, everyone where stuff is, and also hopefully I will be able to do a couple of live demos, knock on wood. So yeah, I'm going to try to summarize afterwards by going through a good, bad, ugly, and also try to sketch out the future, as I see it, and end up with a summary. For those of you who read the description of this talk, you will have noticed that I'm actually looking for developers. It would be really cool. But, and also the, I should mention that the project is a MIT licensed, so you're free to basically do whatever you want with it, if you find it, find it to your liking. All right, so the history. The idea came about, I was attending a hackathon in Norway with the like cultural ministry, which I was there representing the national mapping authorities, trying to educate people on how to use our data in their products. And being in such an environment, you tend to get, well, inspired really, because there's a lot of people around you, they're all hacking, they're all got their ideas. And I, sitting there, I remembered reading this blog. This guy, Björn Sönlik, wrote this blog post on how he basically took a digital elevation model and married it with 3JS to create this. I thought, wow, that's really cool. I want to do that. Then I took a, I figured, well, since I work with services mainly, that's my main job, that's what I do, I create services. And for a long time, the longest time we had this WCS service, which I don't know, is everyone familiar with WCS? Is that, oh, sorry, is anyone not familiar with WCS? Everyone? Okay. WCS is a raster-based, it delivers rasters, it's like WMS, except you get the data instead of just a picture. So I made this WCS service. The problem was no one was using it. So this would be a really good idea to sort of show off the service and maybe show how you can use, utilize the WCS service on the fly, et cetera. So I started out. Let me just go straight into this one as well. First thing I noticed was there was a disjoint between WCS, or the formats that WCS wants to give you. Basically, I only had it configured for GeoTiffs. Turns out browsers don't like Tiff files. They don't want to have anything to do with that. So that was a problem. So I could order the elevations, I got them, but I couldn't do anything with them. So, okay, well, figure, let's hack around that. 
So I set up the WCS so that I now can give XYZ files. That's fine and dandy. So now I get the elevations. And now all I had to do was plug in the WMS get URL. And basically what I ended up with there was, oops, let's see, it's better from this perspective, 3D model like that, which is created based on nothing but a bounding box, really. Give it a bounding box. It's going to query the WCS, it queries the WMS, and it meshes it together and wham bam, 3JS. Brilliant tool, really. So, yeah. For those of you who haven't used 3JS, I would suggest going to this place just to need to plug there. It's a wealth of inspiration in there. I will not go into the details of each of them. Just know that it's there. Okay, presentation, presentation. All right. So for the components. All right. It's very easy and it's very lightweight. The project has only been around for like six months. The code base is 1,000 lines. So I'm trying to keep it simple. But we need, of course, obviously we need 3JS for the visualization. After long last, I did find this TIFFJS, which enabled me to get around the problem of using the XYZ files. XYZ files is very, very big. I mean, you get, it's basically text files. So it's not compressed at all. So it takes a while to get the data. Using TIFFJS, I was able to read the TIFF files from the WCS. Brilliant little piece of JavaScript in there. Some limitations to it. I so far, I haven't been able to get an array of length larger than 2048. But I'm sure that eventually we'll get through that as well. Services initially, we use WMS and WCS. And again, for those who read the description of this talk, it says that we're planning to do WMTS. Yay! Actually, we do now. The current code will use WMTS, it uses tiles. There are some problems with it. And as much as I sort of broke the WMS implementation, so now you need to have a tile set. But it works as long as you're in Norway. But it's not a problem. You just, if you plug in another tile set, it should work out of the box. And it is easy to fix. I've just been sort of busy doing other stuff. Aren't we all? So demo-wise, first, oh, no, no, no, you weren't supposed to see that. Okay, so get up page. We are three people who committed to do the project so far. You can tell that this part over here was the initial, when we initially did it, you'll find that this is the vacation. And then this is me getting nervous before this talk. But there's been, like I said, it's not very six months old. There is a total of a thousand lines of code, about 80 commits in total. So fairly young. The structure is, again, quite basic. Got your external libraries here, get your source code in here. I'm just going to go briefly through the organization of this because, oh, yeah, I didn't mention. None of this is documented. The page where I show you the labs.CardFerke.no page has some documentation of the parameters that it takes. You can parameterize it using SRS or like coordinate system projections or you can use your own WCS in the URL, which layers are you using, what coverage are you using, stuff like that. It's in the article. But the best place is actually this part. Fairly simple, really. This thing here checks in the URL you're using to see if any of these keywords are there. Fetch its value and assigns it to the variable. Then it's reused throughout. What else is I said? I was saying this wasn't documented. This is documented. All right. So that was the universe part of it. This one everyone should ignore because it's not there, really. 
So then you have the WCS and WMTS loaders, really. What they do is basically this: the WCS loader takes a TIFF file and iterates through a geometry, a three.js geometry, and assigns the Z values to the corresponding vertices throughout. The meat of the project is in this file. It is not very big, but it's the thing that tries to sew everything together; it's where you will find the logic. It will sort out what the different bounding boxes are, it will fetch the tiles and, well, it's where everything happens. It's a big factory. If you ever try to use this, this is probably where you will start scratching your head and say, what on earth is going on here? Yes. So now is when I show you the pretty pictures and pray to the techno gods. Let's start with this one. This is the standard area that I use for testing purposes. So basically, you have your 3D model right here. In this case I preloaded it, but it should load fairly quickly. Notice also that this one is using a WMS. Now let's try the slippy part. Hopefully this works, because in theory, yeah, here we go. That was a bit scary. So I'm going to try to refresh this page since it seemed to work. Notice also that in the URL you have a bounding box up here, and you have a layer defined at the end. And now it doesn't load. Yeah. Look at that. Luckily, I have another demo to show. I figured I'd just show off how you can use it; this is just my example implementation, you can make it much more fancy. I'm not a designer, so if you're a designer, you probably have your head down in your lap right now, scratching your eyeballs out or something. But the idea here is that you define a bounding box over here and, God, I really hope this works. And what you're not seeing right now is the 3D model appearing where it should be. Yeah, yeah, oh, I forgot, I forgot. Of course, it's nighttime. Yeah. That's too bad; that was going to be the big pièce de résistance. Look at that, while you're not looking. So the next thing I was going to show, which was supposed to go really fast, is that you can change your layers over here, you click the button, and it's nighttime again. Yeah, okay. I swear this works. Anyway, that sucks. Oh, okay. Excellent. Okay. So here you have your 3D. See, this is really fancy when it works, but what a downer. I swear it's faster normally, but right now this is coming from a server in Norway and probably the network here isn't helping. Yeah, I'm blaming the network. Okay. So you're all seeing it work, that's great. I'm not going to try any more demos. So we jump to the good, the bad and the ugly. The good stuff: it is, or should be, fairly easy to use when it's set up properly. The fact that you can just define your parameters in the URL makes it easy to construct your URL and then, bang, you get your 3D model up and running. I also listed the potential there, seeing as, to my mind, it does have a lot of potential. I know there are people out there who make much fancier graphics, they make their globes and everything looks really nice. This does one thing, and hopefully it should do it really well. I listed a few bugs. In a code base of a thousand lines there's a limit to how many bugs you can have; some do still pop up now and then. So, the bad stuff: well, low maturity, that's a given. But I find that it's really rewarding to work with, because there are still so many things to do, so you can actually work on something.
You don't nitpick on a small bug that no one's ever going to notice. You can actually make new features in this. There's no projections. That's part of the keeping it simple. Right now, you sort of have to use the same projection on your WCS as your, whatever you're overlaying, WMS, WMTS. And yes, as I mentioned earlier, the lack of documentation can be an issue. Ugly. Yes, some of the code is ugly. Particularly the part when I just, I think the latest commit I did tried to average values in the corners between tiles. That's not easy and I have no idea. I'm not sure that I even can understand the code myself right now. So hopefully someone will make it more legible at some point. I got to the point where it worked and then it was hands off. Clunky navigation. Yes, so far I haven't done anything to the navigation. Just use the standard one that I found in an example somewhere. And that is a big one that should be fixed. It is a big piece of work getting that because it ties into a lot of other things. Like if you want this to be a true slip map, you want to be able to do, you need to be able to change some levels and stuff like that. And that ties into navigation of the camera. So it's a big piece of work that needs doing. Yeah. And as I mentioned earlier, right now it's sort of a hard code to use WMTS. It could be fixed fairly easy but it's going to be ugly because if you try to use this and you're sort of outside of Norway, it might break entirely. Yeah. So it brings us to the future of this. I imagine, well, these three things, two of them, the first two should be doable in my lifetime. Changing zoom levels dynamically. Right now it doesn't do that at all. You get the zoom level that sort of corresponds to the initial bounding box you start with and that's what you're stuck with. But there is some considerations taking to account there with like level of details because, you know, the classic way of doing this is that you have higher resolution tiles close to where you're looking from and then you have lower resolution tiles further away. The problem is that then you then query different zoom levels and if you have different generalization, it tends to start looking ugly quite quick. It's my experience anyway. Of course, the other way to do it is to create a, to tailor WMTS and WMS service for the specific use but then that wouldn't be general enough. I want something more general than that. So I haven't really tackled that problem yet. All right, optimizing geometry. Yes, big one. Right now we load tiles, we load geometry and we just sort of stack them together, do a little bit of edge trying to sort them together along the edges, not much more than that. I have read documentation on how to merge it. Still haven't really gone into that one, particularly because I haven't figured out whether or not we're going to reuse the geometry or just get some new geometry. All right, those are the two ways to go. No idea which one is the better. Still, I'm fairly optimistic that changing zoom levels and optimizing geometry should be quite possible to do quite soon, which brings me to the third one, the WFS. Yeah. I imagine the first one, importing geo-json could be done. I mean, there must be libraries out there. You can read geo-json from WFS and then sort of mapping those up to 3JS and its geometry types should be fairly easy. GML, I'm not as certain. Well, I put it up there because it is an interesting problem that could be tackled. 
So, yeah, I just put this slide together in case anyone wants to take a picture; this is the information you need. And I'm just going to wrap it up there. Thank you. Does the fact that we really just need 2.5D help with the navigation, the slippy part, so we can get rid of the pivoting part? And then that makes it easier to just navigate up and down and forward and back, side to side. Yes. Yes. I thought I should probably have made that distinction earlier on, because someone was bound to point out that it's not 3D, it's 2.5D. But yeah, that is basically what I'm leveraging right now. Do you do any level-of-detail rendering with the terrains, or do you have any plans for really large landscapes? That's the thing I really haven't figured out how to do yet. I mean, the geometry is easy, but getting the graphics to tie together, that's the problem. I've asked several people about this, and most of them probably know a lot more than me, and the best advice I got so far is: yeah, use fog. Use what? All right. That's it.
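Since, as the last question points out, the whole approach is really 2.5D, the job of the loaders described earlier boils down to stamping a grid of WCS elevations onto the vertices of a regular plane, over which the WMS image is then draped as a texture. A rough sketch of that idea follows, using NumPy arrays as a stand-in for the project's actual three.js geometry code; the array shapes, bounding box and scaling are illustrative assumptions.

```python
# Rough sketch of the 2.5D idea behind the loaders: a regular grid of (x, y)
# vertices gets its z values from the WCS elevation raster. NumPy stands in
# here for the JavaScript that walks a three.js plane geometry.
import numpy as np

def heightmap_to_vertices(elevations, bbox):
    """elevations: 2D array (rows x cols); bbox: (minx, miny, maxx, maxy)."""
    rows, cols = elevations.shape
    minx, miny, maxx, maxy = bbox
    xs = np.linspace(minx, maxx, cols)
    ys = np.linspace(maxy, miny, rows)      # raster rows run north to south
    xx, yy = np.meshgrid(xs, ys)
    # One (x, y, z) triple per grid node, which is what a plane geometry
    # with (cols-1) x (rows-1) segments expects for its vertex positions.
    return np.dstack([xx, yy, elevations]).reshape(-1, 3)

dem = np.random.uniform(0, 400, size=(5, 5))   # fake 5x5 elevation grid
verts = heightmap_to_vertices(dem, (161244, 6831251, 171526, 6837409))
print(verts.shape)                             # (25, 3)
```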
|
We will demonstrate the use of WMS and WCS as input to three.js for rendering an interactive 3D-model in the browser, based on actual data. Our extension of three.js, (WxS.threejs) takes several inputs, the boundingbox being the simplest. WMS-service, WCS-service, LAYERS and many, many other variables can be set either via the URL or predefined in a quick and easy setup. Further, we wish to introduce the idea of using cached WMS as an input. We've prepared a framework that is ready for consuming WMTS, rendering tiled 3D-models that are stitched together. The goal is to implement a slippy map in 3D. This is an open invitation to all hungry hackers who wish to join us in tapping the potential.Original article:http://labs.kartverket.no/wcs-i-threejs/Example:http://labs.kartverket.no/threejs/wxs.three.htmlGithub:https://github.com/jarped/wxs.threejs
|
10.5446/31761 (DOI)
|
I will be presenting today the work of my colleagues at North Carolina State University. It is summarized in this talk, named GIS-based modeling with tangible interaction. So let's see what's behind this fancy name. As you know, the landscape controls many natural and anthropogenic phenomena. It influences water flow, erosion, inundation, but also, for example, the visibility of terrain features or solar radiation. And to make better land management decisions, it is important to understand the often unforeseen impacts of landscape changes, which can occur naturally or can result from some anthropogenic interventions. To understand these impacts, we can try to explore different scenarios and see how each scenario will behave. But to do that, we need to incorporate the terrain modifications into the digital elevation model, usually through dedicated software like GIS or CAD software. And this can be pretty unintuitive and tedious, as you might know; we are kind of limited by the two-dimensional display, the mouse and things like that. So instead, we can modify a physical three-dimensional model, which is intuitive, doesn't require any prior experience with the software and enables more people to interact and share their ideas. The three-dimensional model represents a tangible user interface. And the tangible user interface, unlike the graphical user interface, takes much more advantage of people's ability to sense and manipulate the physical world. So the idea of modifying the physical model instead of its digital representation, that's the key concept of the tangible geospatial modeling and visualization system, which we also call, shortly, TanGeoMS. In TanGeoMS, we couple the physical model with a scanner, a projector and GIS software. And in this way, we can model and explore how landscape changes influence the studied phenomena. TanGeoMS is not a completely new system. However, recent advances in technology allowed us to improve the system and make it much more accessible, as I will explain later on. So this is a picture of the physical setup. You can see that we have there the computer with the GIS software, then the physical model itself. Above it, there is a scanner and a projector. Instead of using a laser scanner, which we did before, we started to use a Kinect sensor. The advantage is obvious: the price. From tens of thousands of dollars, we are getting to $100, $200. And also, because we couple this system with open source GRASS GIS, it all together makes this setup much more financially accessible, even for individuals. So how does it work? We have this feedback loop: we modify the model, scan it, import the scan into GIS, run a geospatial analysis on the modified digital elevation model, project the results back and evaluate the changes. And this all happens in real time. This rapid feedback loop allows us to study the phenomenon and allows better and more creative design. So this will be just a short demonstration of the system, and I will show more applications later on. We have here the physical model made from sand, and we project an orthophoto and dynamically computed contours and a water flow simulation during a storm event. And now, as you can see, oh, okay, the road gets flooded quickly. So what he's trying to do is prevent that by raising the road. And as you can see, the flow pattern changes and it results in something like a pond where the water accumulates. So you might be wondering how we actually create the physical models.
We use manual ways, automated ways and a combination of both. When we first look at the manual ways: we use sand. But it's a special sand, enriched with a polymer. It's basically a toy which you can buy online, but it is nice, it sticks together, so it has nice properties. And then, how do we create the model? The easiest way is to project the contours of your digital elevation model, with color representing the elevation, and you just form it with your hands. To get something more precise, you can then scan this model and do a subtraction, and you basically get the difference between the current state of the model and the desired digital elevation model. And the colors can show you: oh, you need to add more sand here, or there's too much sand in this place. Here it's the red and blue colors. So this is the simplest way. But we are also doing printed models. This is still a little bit expensive, and mainly the size is the limitation, because we don't have access to any 3D printer which would print larger models. That's why we tried to use a CNC router to carve the model from MDF, medium-density fiberboard. There's obviously no real limitation on the size, and it's actually cheaper too. But it basically depends on whether you have access to either a CNC router or a 3D printing machine. So we have these carved or printed models, but we want to modify them. What we do is put a layer of sand on top of those models. And to get back the precise surface, you can use the digital elevation model but invert it, carve a second model and use it as a mold. You press it, as you can see on the picture, and you get a precise surface, but it's still modifiable. This is just a summary of the workflow for the CNC router. Basically, you either have the digital elevation model ready in GRASS GIS or you can derive it from, for example, LiDAR data, which we do quite often. Then you export it as an ASCII file, and then we use Rhino to process it for the CNC machine, which involves generating toolpaths and exporting the numeric control file which can be read by the CNC router. So now I will talk a little bit more about the software part. There are two main components. This is the Kinect scanning application. We basically took the Kinect for Windows samples, these are sample applications, and we just customized them for our needs. So this is doing the scanning part. Then we use GRASS GIS for the modeling part, and I will talk about it later on. And then, like a glue, we use Python. Python because that's the easiest way, at least for me, and GRASS GIS has bindings, an API, for Python. The software is available on GitHub; the GitHub URL will be on the last slide. We chose GRASS GIS because it is a very comprehensive GIS. It allows us to do hydrology modeling, wildfire, solar radiation, geomorphology and so on. Another important thing is that it supports 3D rasters, which you don't see very often with GIS software, and we take advantage of this as you will see in the applications. Moreover, it now also has advanced spatio-temporal handling capabilities: GRASS GIS has a temporal framework which allows you to handle large time series of data much more easily. And for us it's interesting because when we compute some simulations, either solar radiation during the day or wildfire spread, we deal with these time series. This also includes some newer visualization tools, like creating animations.
You can imagine that animations are pretty important to us, for projecting the animations onto the model. Another important thing is that GRASS GIS talks to Python; it also talks to some other applications, typically R and others. And then it's open source, so we can modify it for our needs. What is great is that we can also give back to the project: we already fixed a few bugs, because we are running GRASS GIS kind of on the edge in a certain sense, and we also developed some new tools which other users can now use. So I would like to summarize the applications which we came up with. TanGeoMS is a collaborative tool, so more people can interact with the model. It's creative; you can imagine that people get creative when they play with sand. It's a great tool for geodesign, because you can test these different scenarios and immediately see what the impact of your changes is, whether you can improve it and so on. Another thing is GIS education. We would like to incorporate this into GIS courses, because it helps students to understand some algorithms, like water flow algorithms. And another thing is that it is also a tool for testing new algorithms, because you generate new data, and this can help you understand where an algorithm might not perform as well. So now we are getting to the applications. In this one we are looking at Jockey's Ridge in North Carolina. It's a sand dune on the Outer Banks, and we are simulating the impacts of storm surge. It's a simple simulation, but it shows you where the weak places are, where the dunes are vulnerable. And you can play with it and imagine that you could build up more dune in this place, and it could prevent the water from flooding the buildings. Then these are just a couple of pictures where students were designing a new residential quarter in a place in Raleigh, very close to the university. They were trying to assess whether it would influence the watersheds there and what the erosion would look like. And this is an example of the educational usage and also of testing an algorithm, because here we are running a special algorithm called geomorphons. It identifies landforms like ridges and valleys. It's quite a new tool in GRASS GIS, and it is based on visibility analysis combined with a computer vision approach; that's just interesting. In this way you can explain to students of a geomorphology class what they are talking about, and also this algorithm. This next example shows how TanGeoMS can be used for decision support during a wildfire. GRASS GIS has wildfire modeling tools, which are used here, and when you look at the model, you are basically looking at the tree canopy. This was derived from LiDAR data, and we can simulate a firebreak by removing the tree canopy, the sand, from a certain place, as you can see now. And you can immediately see how the fire spread changes. In this case we were not lucky; the fire just went around. This is another interesting application. This was a project of a student, and I was helping him with it. These are coal ash ponds near the Cape Fear River in North Carolina. These coal ash ponds are dangerous in the sense that there is a lot of toxic waste in them. And so he was interested in assessing what the impact would be of some catastrophic event, like the pond breaking and all the water with the waste spilling out.
And interestingly, it wouldn't really get into the river. It would probably just go into the ground, and it would cause damage anyway, but in a different sense. And I think this is the last one I have prepared here. It's quite an exciting application where we use it for 3D raster visualization. Just to explain: we have these points. These are samples, 3D point samples with soil moisture, the percentage of soil moisture, and we interpolated a 3D raster of the soil moisture. And you can see that the sandbox basically represents the 3D raster. When you dig a pit or make some depression, it visualizes the values of the 3D raster at that particular place and depth. It is the same as if you would just go somewhere, dig a real pit and look down to see what's in the pit. It is computed as a cross-section of the scanned surface with the 3D raster. And this is just the same thing with the same data. You can see there's the orthophoto, to understand what kind of place this is, some field, some agricultural place, and the triangles represent the locations of the samples. And there is quite a deep hole inside. When you add more sand into the depression, you can see that the values obviously change; higher up you have lower moisture and so on. So it's like this. So, what's next? We are currently trying to incorporate TanGeoMS into GIS courses. We are doing workshops with landscape architects. And there are a lot of things to do. We'd like to try, for example, different ways of interaction: you can interact by placing some fabric, saying this is some particular landscape class and so on. So you can interact with colors or with different textures. And there are other things. Also, we would like to stop using Microsoft Windows, because currently we have to use Windows because of the Kinect; the Kinect doesn't work elsewhere. So either we can find a way to use the Kinect elsewhere, or we can also consider using different sensors. Sensors are really developing very quickly now, so we will hopefully try different solutions too. And a related issue is that so far the workflow for preparing the 3D physical models with the CNC router is not completely open source, because we are using the Rhino software. So that would be my other wish, to find a completely open source way to do it. There are some tools I have already looked at, but we will see. Okay. So thank you. There's my email, our website where you can find some other pictures and videos, and the code is on GitHub. So thank you. Thanks. That was really cool. That's neat software you have there. I'm curious about the part about the landform identification. Uh huh. Could you tell me how many types of landforms you identify, what kind of accuracy testing you did on that, and if you know of other software that does that kind of stuff, to compare it to? Okay. So we are just using this module in GRASS, so I don't know a lot about it. But you can see basically what you see here: there's a legend, so you can see what kinds of landforms it can do. I would suggest you look at the manual page for this module. It's r.geomorphon, and on the manual page they should explain how it actually works. Thanks. Yeah, that's true. There's a paper on it, so you should be able to just Google it, I think. Okay, great. Thank you. Maybe you said this: how large is the square that they're working with right there?
It's about more than a foot? How large is it? This one is pretty small, it's like 30 centimeters or so. But the other ones, so this one is like half a meter. And so my follow-up question was, do you feel that the larger one is adequate? Do you feel that if you had something larger you could do much more, with more precision and resolution and so forth? Of course, larger models are better in the sense that you can do much more stuff there. But then you also have to consider the scanning part: the larger the model, the higher you have to place the scanner, which means lower precision. So this is like the ideal. You can go larger with a larger model, but you get lower precision. Would there be issues for a larger one just doing the real time, would that slow down significantly and make it unfeasible? Yeah, that's another issue. There are still ways to speed it up, so we will work on this issue too. Anna, I was just wondering, how long does it take to set up the model, the sandbox, for a particular scenario? If you want to create just the sand model and you have all the data you need, like the elevation model, then you can build it in five minutes, so this is pretty quick. If you would like to prepare the carved model, then you have to spend more time preparing the file read by the CNC router, and then the carving itself can take several hours. Thank you for the presentation. I'm wondering, now that we're looking at the fire spread modeling, how reliable would you say this simulation is? Because I went to the talk by Chris Tonnes from the North American Forest Dynamics Project yesterday, and they analyzed fire events in the United States forests, in the whole country, for the last 30 years, and the patterns he presented, that was one of their conclusions, are geometrically very complicated, and fire doesn't spread anything like what's going on here. This is a simplified simulation. It partially depends on the software and on the input data. We have these wildfire modeling tools and I think these are quite reliable, I'm pretty sure. But then the problem is always with the input data, because it requires different kinds of inputs: you need the fuel classes, you need the moisture, and there are several types of moisture which are taken into account. And from our experience, some of the data are not accessible anywhere and people just make an informed guess. And this can make a difference. And in this case, the wind is just blowing in one direction with one speed; you can also incorporate changes of wind and so on. I'm also wondering, how does your software talk to CAD software? Is it possible? I'm talking mainly about the small video where you showed the water flows and building up the road. How does this talk to CAD software, could it be used as a draft for an actual project afterwards? Yes, you can save the state. You have the digital elevation model in GRASS GIS and then you can export it. There are multiple ways to export it, either as points or... Okay, thank you.
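To give a rough idea of what one pass through the scan, analyze and project loop looks like on the GIS side, including the export-for-CAD question raised just above, here is a minimal sketch using the GRASS Python scripting API. The module names (r.in.gdal, g.region, r.contour, r.slope.aspect, r.sim.water, r.out.xyz) are real GRASS GIS modules, but the file names, map names and parameter values are placeholders, the Kinect capture and the projection back onto the sand are outside the snippet, and this is not the actual TanGeoMS code.

```python
# Minimal sketch of one pass through the loop on the GIS side: a freshly
# scanned DEM comes in, contours and a water-flow simulation are computed,
# and the DEM can be exported as XYZ points for CAD.
# Assumes it runs inside a GRASS GIS session with a location/mapset set up.
import grass.script as gs

def process_scan(scan_tif="kinect_scan.tif"):
    # 1. Import the scanned surface as the current digital elevation model.
    gs.run_command("r.in.gdal", input=scan_tif, output="scan_dem",
                   overwrite=True)
    gs.run_command("g.region", raster="scan_dem")

    # 2. Derive the layers that get projected back onto the sand model.
    gs.run_command("r.contour", input="scan_dem", output="scan_contours",
                   step=1, overwrite=True)
    gs.run_command("r.slope.aspect", elevation="scan_dem",
                   dx="scan_dx", dy="scan_dy", overwrite=True)
    gs.run_command("r.sim.water", elevation="scan_dem",
                   dx="scan_dx", dy="scan_dy",
                   depth="water_depth", overwrite=True)

    # 3. Export the current state for use elsewhere, e.g. as XYZ points
    #    that a CAD package can read.
    gs.run_command("r.out.xyz", input="scan_dem", output="scan_dem.xyz",
                   separator=",")

process_scan()
```

In the real system this function would be called repeatedly, once per scan, which is what makes the feedback feel continuous.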
|
We present advances in the development of TanGeoMS, tangible geospatial modeling and visualization system. By coupling a physical three-dimensional landscape model, Kinect scanner, projector, and open source GIS software suite GRASS GIS, we created an intuitive tangible interface for dynamic modeling of real-world processes in response to different terrain data inputs. How does it work? You modify the flexible physical model by hands while it is being scanned, imported and processed in GRASS GIS. After computing a desired geospatial analysis or dynamic simulation, the results are projected back on the physical model providing you with instant feedback.With TanGeoMS we can explore how dune breaches affect coastal flooding, how the spread of fire is influenced by location of firebreaks or what is the effectiveness of various landscape designs for controlling runoff and erosion. We can add buildings and assess the distribution of solar radiation for different building sizes and locations or explore impacts of built structures and trees on line of sight and viewsheds. We will cover new 3D manufacturing technologies and materials we use to create precise physical models and discuss various options for single user and collaborative system designs. Whether you are involved in environmental modeling, decision-making, education or you are just curious, you will find this talk inspiring for your own projects.
|
10.5446/31768 (DOI)
|
Okay, everybody, welcome. This is the second talk. Can you hear me? Is this okay? All right. Okay, well, it's the last day of FOSS4G, it's already afternoon, and this is the last talk in this session. So this is going to be a talk a little bit on the lighter side, but about a very important topic, actually. My name is Peter Lowe, and I'm going to talk about GRASS GIS, Star Trek, and old videotape. Oh, yeah. Okay. Basically, for those of you who have to flee earlier, what to take home: all these open source projects that we have already carry a heritage. This heritage comprises software, of course, and data, but also all the auxiliary documentation and information that's available. And this needs to be preserved and made available in a sustainable way, so we can find this stuff again in the years to come. And this heritage footprint is growing, because we have a multiplying number of projects, which are also interconnected, and the footprint for each particular project grows, so it's growing across the total number of projects. So we really have to do something about it, because otherwise we will face something like amnesia, which would not be so nice. Okay, so that's the thing in a nutshell. I work for a library. The library angle on open source here at FOSS4G is: if you add science and open source, you end up with open science. Open science means science basically can only advance if the knowledge is shared and communicated, because otherwise we will reinvent the wheel time and time again. So from the library angle, if we can accelerate the sharing of information, we are also accelerating the advancement of science. I work for the German National Library of Science and Technology. This is the largest library for science and technology globally. We are based in Hanover; there's a little map up there, stolen from Wikipedia. Hanover is home of the old rock band Scorpions. For those of you who remember the eighties: yeah, there's a fan, that's good. Okay. We have a lot of documents there. We have 180 million documents on our GetInfo portal, available for everybody worldwide. We have 125 kilometers of shelving, which is quite a bit, and we are working as a national library in the fields of engineering, technology and the physical sciences. On the side, we also run the university library for Hanover, and we have been using digital object identifiers for research data sets since 2005. Libraries are advancing. The future of libraries lies in what are called data-driven libraries. Those of you who attended my talk in the morning already know these slides, but I'm going to be really brief on this. The data-driven libraries of the future will be community centers for new services, for innovative research and for what is called serendipitous discovery. It's not me saying that; the Harvard-Smithsonian Center for Astrophysics, especially their library, these guys are saying that, so you can blame them for this bold statement. But we agree with it, of course. So while scientists focus on the final frontier, data-driven libraries will work on designing a different kind of space, full of physical and virtual tools that capture the imagination and enable researchers to explore it. So this is really the road into the future. Our fields in Hanover are visual analytics, content-based retrieval, ontologies and Science 2.0. There's a bunch of challenges; I'm going to be brief about this one.
So, just to support scientists and get them the recognition they really deserve: I want to make sure that they get reputation, which is the currency, the coin of science, the coinage of science. If somebody publishes data, they must be supported, because data publishing is labor-intensive. Usually these people, at least for now, are unsung heroes, which is not so nice. If data gets published, there must be a way that we can retrieve it later, so some kind of persistent identifier is needed to access the data in the long run. The metadata set has to be complete. And of course, people will only use this as long as the authors have full control over their publications; otherwise they're not going to touch it. That's the concept of so-called data centers. The data center would be the connection between the researchers and an agency which provides digital identifiers, which are like ISBN numbers, if you like, for the data sets. However, this is a nice concept and they're only slowly getting there. There are more challenges and open questions than we would like. There's an absence of policies on how to run these centers, there's an absence of infrastructure and data management know-how, and there's the lack of financing, of course; somebody has to pay for it in the end. However, there's a new hope, if you will excuse the Star Trek, sorry, Star Wars quote. At least over in Europe, there's a growing number of funding agencies who demand long-term archiving and publication of research data. So hopefully this will be the means to get us a little bit of leverage to advance this. In the meantime, in Hanover we are running a small project on that. It's funded by the German Research Council, the DFG, and it's called RADAR. On the top left-hand side you see the standard approach, how publication usually goes: you have a manuscript, you publish it, it ends up in the library. For data and metadata, there would be such a data center, possibly a world data center, and this stuff would be made available via a web portal. So in this little project, I'm just glossing over this one, called RADAR, we're providing this infrastructure, so that folks from any kind of science can approach us and we set them up with a data center right away, and they can start doing their work. Okay. So basically, our strategy as a library is to move away from just text and to incorporate scientific films, 3D objects, simulations, data and, of course, software. Which brings me to GRASS. Quick show of hands: who is familiar with GRASS? Who's not? Okay, welcome. But I guess we're pretty much family, that's good. Okay, just a quick run-through. GRASS is a very old dog. Development started in 1982. It's called the Geographic Resources Analysis Support System. It's one of the initial OSGeo projects. The current versions are GRASS 6.4 and GRASS 7 beta. It works nicely with QGIS and it has an awful lot of modules. But actually, GRASS is not just software; GRASS is people. The community has been growing for over 30 years and it's still alive and very much kicking. In these 30 years, we have seen generations of developers come and go. We have some photos from the early days. Like Helena, you are on one of those, and I actually put your portrait on the other side, so apparently you haven't really changed, and I guess it has been more than five years in between. This is just a small sample of all the people who are active in GRASS; I just got that from the OSGeo web page.
There are many, many more people. However, it's the same thing: people retire from the project, and it's really crucial to preserve their knowledge, because otherwise it's lost and we have to reinvent the wheel again, which would be bad. But still, after 30 years, seriously, nobody likes to write documentation. I guess humanity is not really evolving, at least when it comes to that. Instead, by now we have seen the advent of Web 2.0 technologies, so YouTube and SlideShare are heavily used to put up little videos about what we do and how we do it. These web portals like YouTube have actually become a treasure trove of knowledge. And when you check the GRASS video page on the GRASS wiki, it is so super easy by now to just do a screencast; there's a description, like a cookbook, of how to do it. So it's really super easy; you can just throw something together for YouTube in 10 minutes if you like. But this is not just GRASS, it's also the other projects that we have in OSGeo. Markus Neteler over there was so kind to come up with this chart which tells us the years, how old the projects are. And you see some old dogs like MOSS and GRASS GIS, but we also have quite a large number of new kids on the block. And I did the totals of videos available on YouTube for these projects. Okay, of course this search is most likely not complete, but it's safe to say that we have about 50,000 videos sloshing around on YouTube. And this information is not really properly harvested at the moment. So, for the heck of it, last year I started a search for GRASS GIS. That's what I got: I got some stuff which doesn't really have proper metadata, and I got something like Star Wars, Episode 1, The Phantom Menace, which was recommended to me by YouTube. I really don't know why, to be serious. But I get a lot of these strange search results, and if I really want to search cost-efficiently or time-efficiently, this is hampering me. I don't like that. Another thing is that videos get moved to different locations. For a while I can help myself if I have the URL; if I have the URL, I will find it. But I actually have no guarantee how long YouTube will store the stuff. We just don't know. So from a library perspective, we have this situation again. It's a similar chart to the one I showed you before. On the left-hand side you have the manuscript of the publication, and it goes and ends up in the library; that's the way it used to be. On the right-hand side you have data and metadata, you have your knowledge, your expertise in your field, which is stored in private files. That would be a video, a screen capture, what you have on your machine. And you upload it to YouTube. I'm sorry, you're putting it in a trash can. You don't know how long it will stay there. Well, yes, it's up to you when you empty the can or not, of course. But still, it's a trash can. It might go away, or it will go away. So, a reality check from the perspective of a library: our communication channels are volatile, difficult to search and very, very hard to cite. This is really an issue. So, okay, to wrap this up: librarians really tend to interlink all this kind of stuff, because we want to be a data-driven library. Now, here's the fun part. Let's talk Star Trek. And for that I would like to take you back to the 80s. The 80s: Madonna was young, Michael Jackson ruled supreme, there was a space shuttle flying, the Russians had their own, we had the Rubik's Cube, and the Berlin Wall was still standing.
Computers were really small. And so we had to go to video arcades to play Donkey Kong and other games like 1943. Sol Katz was busy developing MOSS as the first open source GIS. Firefox was not a web browser (this was before the World Wide Web), it was a Clint Eastwood movie. And there was another movie, Back to the Future 2, which took us, apparently, back to the future: the year 2015. So that's next spring. Well, modern times. Okay, computer graphics really looked like this; you got stuff like that, and if you printed it, it looked like that. We started on floppies. And thank God we had some comics like Bloom County and Calvin and Hobbes to keep us all sane. Okay. In those days, GRASS started in 1982. In 1987, there were 100 users worldwide using GRASS 2. The hardware stack for that cost $50,000 in old money; in modern currency, that would be $90,000 just for one GRASS installation. That's quite a lot. And to get more people into using GRASS, because by '88 they had more than 1,000 installations, a video was produced, the GRASS story. Now, what is so important about GRASS? One of the things is that GRASS is the granddaddy of OGC, of all the web standards that we are using. OGC stemmed from the GRASS Interagency Steering Committee, which was renamed into the Open GRASS Foundation, which was renamed into the Open GIS Consortium, and that's now the Open Geospatial Consortium. So this is why GRASS is really the granddaddy of what we're doing nowadays. Okay, the thing is, customer expectations in those days were driven by Star Trek. Let's go back to the year '82; that was at the bottom, I hope you can read that. There was the second Star Trek movie, The Wrath of Khan, which was just reimagined in the Into Darkness flick last year. By '83, Jim Westervelt, in those days the head of the GRASS development, had their customer at the fort, Fort McClellan, and they installed the software, and the customers really liked what they saw on the screen. But the first question apparently was: can you rotate that? Because people were sort of used to the graphics they knew from the movies. And the developers were really put down, because it had taken them so much effort to just put an image on the screen, in color. So, hmm, yeah, the vision was apparently already driven by Star Trek, which is quite astonishing. Coming from that background, it's not really a surprise that when, in 1987, the US Army Corps of Engineers Research Laboratory had this video produced, William Shatner, that's Captain Kirk, the original one, was asked to narrate the thing, and this video was distributed on VHS tapes. For those of you who don't recall them, that's what they looked like, and that's the VCR. Old technology. Okay, so this is old videotape, now with a little GRASS in it. Okay. Now, these tapes were rotting somewhere out there, and the digital divide was bridged in 2004 for the very first time. There was the first GRASS Users Conference in Bangkok, and Jim Westervelt had digitized a VHS copy of that movie, and it was screened there, which got a lot of hoots; of course, we loved it. And Jeff McKenna, is he in the room? No, he's busy. Okay, he put that on a USB stick, carried it away, and this file was eventually uploaded to the GRASS web portal, where it was available for those who knew where to find it. Eventually somebody else, we don't really know the gentleman, uploaded the video in 2011 to YouTube, with a limited set of metadata.
In the meantime, Wired magazine, which is one of those hipster magazines for the World Wide Web, got wind of the story and published a nice small article in 2013, making the statement that this movie is actually cool, it's sort of significant, but it's sort of in limbo, because you can't find it on the Internet Movie Database, it's not on Wikipedia, it's not on Mr. Shatner's website. And the title of that article, which I find somewhat disturbing, is Video Flashback 1987: Shatner tells you where you can stick your maps. I started worrying when I read that. Okay, here's a little excerpt from that article. They say that they're confident that if you like Shatner or like maps, you'll like the video, okay; but they're really certain that if you're a professional geographer and you've already seen the video 100 times, you still get super excited every time you watch it, and you can barely contain yourself at timestamp 1:50, when Captain Kirk's voice tells you: don't keep your information about soils, vegetation, roads, archaeological sites rolled up in map tubes or stuffed into drawers, keep it in a computer. Wow, okay, so that was the 1980s. Okay, right, but let's advance to this year, 2014. This movie has been around on the GRASS website and on YouTube, but people still can't find it. There were discussions on the mailing list just two months ago about whether this video shouldn't be put on YouTube; people couldn't find it, there was just no way. So there was another successful attempt to bridge the digital divide, and that's where my library comes in. We've developed a scientific alternative to YouTube, where you get long-term preservation for a scientific film and you can cite it. And the GRASS video was chosen as a test candidate for the portal. So, via Jim Westervelt, the original producers, Roger Inman and Bob Lozar, were contacted. Unfortunately, Carla Payton had already passed away; she was apparently the mastermind behind the graphics for the video. And Roger Inman actually discovered a high-resolution copy of the video, which he still had stacked away. So the good news is that this is now available via our video portal. It's fully citable, it's searchable, and it's long-term preserved for the future. Okay, I'm going to skip this one. This is just about how the portal looks and how we can search the video, which is quite nice. Of course, from the portal you can download the video, no problem. You can download it as MPEG-4 in different resolutions; we will also have new formats. You can also order it as a DVD if you like. And it allows citation by so-called digital object identifiers, which are just like ISBN numbers for books, but for data. And when you play the movie, this thing has a counter for the seconds in the movie. The counter counts up, and you can cite pretty much the exact time frame if you like. So this looks more or less like that. On the left-hand side, we have an extract from the movie. On the right-hand side, you have the analysis of the spoken word, so you can actually do ASCII searches on that stuff. And this part about keeping it in a computer, I'll skip this one, can be cited by the string there at the top, which is a DOI string. And now that's a nice thing: a DOI string is not a URL, but our web browsers understand this notation.
So if you just cut and paste that into Firefox, it will fast-forward you to a landing page, so you can immediately access the movie. And this technology is going to be valid in 50 years, in 100 years, or whenever. So this is really long-term preservation. Here are just a couple of screenshots on why the new movie is so superior. On the left-hand side you see the old quality. I guess it doesn't really show here, but you get much more detail and much more color, which is on the right-hand side. Here's a nice one: when you fire GRASS up nowadays and you use the ASCII screen, it looks very much like the old screen. And actually, in '87 they were already using the Spearfish test database, which some of you still know. So even after 30 years, some technology is really cool. Now, we got a bit disturbed when we compared this new video with the old one that you already knew, because there were subtle differences, like the one there with the bar charts. So, well, it's unfortunately not a director's cut, and we can't give you an alternative ending, but it has certain differences which make it really interesting to see. Now, for the road ahead at TIB: we really want to advance long-term preservation, so we are pushing our portal further. We also intend to archive the old alternative take of the video, so people can compare both and cite them in scientific work. And we are hunting for more old open-source videos, which are apparently out there rotting, so we are really eager to take care of them. Now, there are also the long-term challenges, and that brings us a bit more into the past. There's this quote about dwarfs standing on the shoulders of giants; I guess most of us have used that at one point. Now, the external view is, like the Pacific Rim poster, that we could perceive GIS as something like a huge exoskeleton, a robot monster thing, which allows us, because we are so tiny, to manhandle data and grapple with problems that we can't otherwise approach. But, well, this is not really open source; I could build such a robot myself, or buy it, of course, why not? But there's also the intrinsic, internal view that we are just, please excuse me, I was only able to find this image, so this is not about Tolkien and the Hobbit, it's more about the dwarfs: so many people are working, making our software better and better, that we are just dwarfs piled upon dwarfs. And this is the alternative. But then the thing is, how do we give credit to all these dwarfs upon whom we are standing right now? So this is a thing. And there are new ways now to visualize software evolution. In the old days, when we wanted to look at the different strains of GRASS, we used diagrams, of course. But by now, because GRASS, of course, lives in SVN, there's a project called Gource, and we can have the code development animated. It really looks very nice, and Markus Neteler over there deserves praise, because he developed such videos; they're all citable, and they're also on our portal. They look like this; this is just a bunch of screenshots. You can really see how GRASS evolves over time, and actually also see the people interfacing with it, who is doing the work, and when. But there are still some open issues.
Right now, it's not possible to immediately cite a version of the code from a specific time frame of the video, but that's the sort of stuff we're working on right now. So, from '82 to 2014, GRASS has evolved, and I guess we've also seen that during the last couple of days, but so has Star Trek. There were new generations, there were new spaceships, there was a space station, we got a lady captain, Captain Kirk got reimagined, and there were also new, interesting concepts, like the Borg. Who knows the Borg? Who doesn't? Okay, this one is for you. According to Wikipedia, the Borg are pretty much an alien race which is a cybernetic collective; like these guys, they are fused together, they're a little bit 80s, they have these tubes sticking out of their heads and a laser pointer attached, which was cool in the 80s, I guess, in the 90s, I'm sorry. But they act as one; they're just one thing together. Why am I bringing this up? This is a different project, it has definitely not been covered here at this conference: an old NASA probe, which was launched in the 70s and was spinning around the solar system, just came back to Earth about three months ago. NASA had stopped collecting data in '97, so now a crowd-funded project was taking over, trying to control this old probe. But gee, all the manuals were lost, the people had retired from NASA, so these guys were sort of desperate, and they asked the collective mind of the internet for help and support: who had more information about how to control this cool sensor? And then they came up with an article called, say, We Are Borg: crowd-sourced ISEE-3 engineering and the collective mind on the internet. And that's what it was about: they needed technical help. Okay, I'm going to be really brief about this, but in the article they state that a significant portion of humanity has by now reached a Borg-like status, not the laser pointer, but this thing about being connected, based on the internet, and we have this collective mind now for communication and information sharing, and they really wonder where this will go. So we are apparently the OSGeo Borg, because we are connected and we're using the internet heavily. So, in the past, the thing was: we have to keep it in a computer. The present, right now, is that we're struggling with this triangle of code, data, and know-how. The librarians worry about long-term preservation, and some people are doing cool visual analytics, on the very left-hand side. But the future is really interesting, because this is, I think, what really sets us apart from the proprietary guys: we can share everything we have. We are sharing our code, we are sharing our data, we are sharing our insight. If we tie this together, this is a powerhouse, and I can't tell you what the future will bring. That's going to be very interesting, and I really expect some leaps in efficiency when we tie this all together and tap into it. Okay, thank you so much for listening. Of course, there's this beautiful portal. I have some material about it over here, also some gummy bears, if you like, to attract you. And there's the promotional video, and this nice code evolution visual analytics thing is also there. And just one word: after the Sol Katz celebration ceremony, there was going to be the point cloud, the musical thing, right? Unfortunately, they can't do that, they can't come, so instead of that, this GRASS movie will be shown then. So see you there, upstairs, right? Okay, thank you. Questions? Hello, thank you.
We know resistance is futile, I already see it there. I was wondering about the captioning you did. There was captioning, you said, of all the text in the video, which you can search by ASCII. Yeah. So that basically means somebody has been typing that from listening, I guess, or is there some automated way that you can do that? We are working on... Thank you. The question is how the text-based queries work. When we process these videos, we run them through OCR, so if there's text in the video, that gets recognized. There's also voice recognition; we are halfway there, that will come. But you are right: the text that you saw there was a transcript, somebody typed that. That's a lot of work, of course. Will you be trying to actually archive the videos yourselves, like of this streamed meeting? Will you archive them, or just point to them in your portal? Content which is included in the portal is archived in Hanover. But there's also the option, for content which is preserved somewhere else, to just have a reference. So that can also be done. All right. Thank you so much. Thank you.
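As a small illustration of the DOI mechanism described in this talk (a DOI is not a URL, but prefixing it with the public doi.org resolver gives one, and the resolver redirects to the landing page), here is a minimal sketch. The example identifier is simply one of the DOIs that appears elsewhere in this document; any valid DOI resolves the same way, and some landing pages may not accept HEAD requests.

```python
# Minimal sketch of DOI resolution: ask the public doi.org resolver and
# follow its redirects to the landing page of the object.
import urllib.request

def resolve_doi(doi):
    req = urllib.request.Request("https://doi.org/" + doi, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        # urlopen follows the redirect chain; the final URL is the landing page.
        return resp.geturl()

print(resolve_doi("10.5446/31768"))  # a DOI that appears in this document
```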
|
This presentation showcases new options for the preservation of audiovisual content in the OSGeo communities beyond the established software repositories or Youtube. Audiovisual content related to OSGeo projects such as training videos and screencasts can be preserved by advanced multimedia archiving and retrival services which are currently developed by the library community. This is demonstrated by the reference case of a newly discovered high resolution version of the GRASS GIS 1987 promotional video which made available from into the AV-portal of the German National Library of Science and Technology (TIB). The portal allows for extended search capabilities based on enhanced metadata derived by automated video analysis. This is a reference case for future preservation activities regarding semantic-enhanced Web2.0 content from OSGeo projects.
|
10.5446/31485 (DOI)
|
Good afternoon. I am glad to have the chance to present our study. This study looked at the relationship between the amount of disparity being presented to the viewer and the accuracy of the depth reported by them. The study was funded by Intel, and we also collaborated with two of their researchers. What do you want from S3D? What's the difference between S3D and 2D? S3D looks more real, as in the real visual world, right? But in fact, if you recall your experiences viewing movie content, what you perceive is something not quite normal. You feel the pop, you feel the looming effect; it's much stronger than what you perceive in real viewing experiences. It's possible that what we are seeing in S3D does not represent real, naturally induced, disparity-based depth perception. In this study, we are trying to verify, first, whether S3D viewers can actually accurately estimate the depth as predicted by the disparity. We also look at motion in depth, to see if the viewer can use the change in disparity to estimate the motion speed and make a response based on it. Think of hitting a baseball: if the baseball is in the real visual world versus in S3D, your response would be different. What we want to know is whether our perception is distorted by the way we present the disparity information. So we conducted two studies. In both studies we have the same set of subjects: 60 subjects, all young, with good binocular visual quality and stereo quality. We present the stimulus on a 55-inch passive 3D TV. The testing environment was a dark room. The subject was seated 2.4 meters away, straight ahead of the TV. In the first task, we present a static S3D stimulus. Three variables are manipulated. The first one is the amount of crossed disparity, meaning the object is supposed to be perceived as closer relative to the reference. The second one is the size, because in a typical S3D viewing environment we have both monocular cues and binocular cues, and we want to see if the binocular cue is competing with, collaborating with, or independent of the monocular cues. So we manipulate the size of the stimulus. We also had each set of subjects view the constant-size trials with a particular size, which is either small, medium, or large. The task is very simple. We have the subject seated in the room viewing the stimulus; the subject needs to pull a physical indicator in the dark room and align it to the depth he or she perceives. So here is the physical setup; it's a top-down view. Here is the 3D screen. We have a reference object, which is the diamond, and then we have four circles with the same amount of disparity, which could be either 36, 40, or 60. Now, this is the physical indicator. It's actually right underneath the stimulus, and the subject needs to pull a pulley to move the indicator from the farthest distance to a distance that they think is the same as the depth at which the circles are located. So this will be the intended depth. The subject moves the indicator until they are satisfied that the two depths are the same. Now, for the constant and proportional cue conditions: in a proportional cue condition, if we have a small disparity, then the image is supposed to be further away, so we have a smaller stimulus. But for the constant-size stimulus, the stimulus size does not change in relation to the disparity level. So here's the result. Let me explain. Here we have the perceived depth, in centimeters away from the screen.
So, meaning in reference to the diamond, how far away from the diamond the object was actually reported to be. Now we have the two types of visual size cues. We also manipulate the disparity level here. For each subset of subjects, they either see the small constant cue, the medium constant cue, which is the gray bar here, or the large constant cue. Okay, and the top three bars are when they were seeing the proportional cue. Now, as you can see, we actually use the disparity cue very well. And you can see that when there are smaller disparities, we perceive the object as much closer to the screen. But if we convert this number in relation to the expected distance and calculate the proportion of error, what you are seeing is actually a constant value. At least for the small constant size and the medium constant size, what you are seeing is that even when you present no size cues, because the size is constant, the subject can still use this particular cue to infer the depth. But there's a certain amount of error. In this case, for small cues it is between 6 to 8% of error. Now, if you are a baseball player, you know this is not good, right? Meaning, you know, it could be the difference between missing the ball and hitting it out of the ballpark. Now, the only thing that seems to be interestingly different here is when you have a large constant cue. What happens is, when the disparity is smaller and you have a large image size, you actually get a much greater error compared to the same disparity but with the proportional cue. And this is easy to explain because, see, you have a very large image, but you have very small disparity. So the monocular cue is competing with the binocular cue. The monocular cue says the object is close, but the binocular cue says it's further away. And that's when you get this kind of error. In the second task, we have the same stimulus, but in this case the four circles actually moved from zero disparity toward the diamond. The diamond is rendered with three levels of disparity: 36, 48, and 60. The circles moved from zero disparity up to 80 pixels of disparity. The task for the subject is to press a button right at the moment they predict the circle is going to intersect the diamond. Now, the circles actually moved at three different speeds, in pixels per second. So over here, this is the slowest speed with the constant cue. You see the image size is not changing, but it's providing increasing binocular disparity cues, compared to these, which are proportional and at a faster speed. And you can see the looming effect provided by the change of image size, which is absent in the constant cue condition. And so here, those are the expected times of arriving at the diamond. When you have a lower speed and you have a larger distance to travel in depth, then you get a longer latency to respond. So here is the result. It's the time error in relation to the expected interception time, shown here in milliseconds. Zero indicates the response time is the same as the expected time. And so what you are seeing is what? Overestimation of speed, meaning the subject actually predicts the time of arrival much earlier than the actual arrival time indicated by the disparity. So notice that if the subjects view constant cues, then they perceive the object as arriving much faster than it actually is. And this is paradoxical, because if the cue size is not changing, it's supposed to feel like the object is not moving in depth, right? In fact, they are estimating it as moving much faster than the one with the proportional size cues.
Now, if we calculate the proportion of error in relation to the expected time — so here, all we are showing is the proportion, with zero indicating the same, and this indicating 10% of error in the negative direction, meaning the estimation is an overestimate; it's faster than the actual predicted time. And what you can see is that if you have lower speeds, or if you have a larger distance to travel, then the estimation usually is more accurate. But there is no difference between the constant and proportional cue for the lower-speed target. When you have the larger-speed target, you see a huge difference: the constant cue has greater error compared to the proportional cue. So what those results tell us is that when we are watching the 3D TV, we are actually perceiving something that is greater than what is intended, which means if you use a 3D camera to shoot a real-world scene, what you are doing is exaggerating what is actually presented in the real visual world and increasing the amount of motion speed and the amount of depth. And the average we have here is between 6 to 10%. So what is happening here is that there is probably a mechanism in the brain that is adding the monocular cue and the binocular cue together. And when you add additional binocular cues, it's adding additional depth information, and this makes you overestimate the depth. And the same can be applied to the motion perception. The implication is, yes, your eyes are deceived in 3D. Now, it could be a good thing if you want to exaggerate it, you want to feel the pop, you want to feel the looming effect. But if you are applying it to sensorimotor learning, for example, if you are using a TV to train a baseball player to hit a baseball, or to train an astronaut to rehearse a task they are going to perform on a spacewalk, it's not going to be optimal. You need to adjust the rendered disparity in order to achieve the effect you want. Thank you very much for your attention. Very interesting talk. If you have a question, please raise your hand and Stephen will bring the microphone to you. Yes, very nice. Could you calibrate the measuring device? In the sense that you are asking someone to set something in depth to appear to be at the same distance as something else? After each trial we will set the indicator to the farthest distance. What if you had a real object and you were asking them to set a second real object to the same depth using your apparatus? Yes, we didn't do that, but I understand your concern; we can go there and verify it. Yes, it would be really simple to do. Then what do you think the effect is due to? What causes the apparent underestimation? I think in a 2D viewing environment we use the monocular cue and other monocular cues to infer the depth. The visual stimulation provides depth information which is encoded in the brain. When you add binocular disparities, this information is going to be processed as well. Some part of the brain, presumably MT, MST, has neurons there to add them together, but they are not tuned to receive both kinds of information in this particular task, so the neuron's activity indicates a depth that is greater than what the disparity level intended. If this prediction is correct, then if you train the individual to do more tasks like this, the neurons should be able to gradually tune and make the prediction more accurate. What monocular cues do you think are causing that? In the proportional case, the size cues, right. But what monocular cues, you think, you know?
In this case, I think size is the most predominant information. When we watch a movie, the most striking information is when an object is moving towards us, right? That's when the size is increasing. That's the most straightforward. But you have that right in the proportional condition. It has to be something else. Maybe accommodation. Or maybe... Yes, it's possible. Yes. It's very subtle though. I don't have any knowledge about how accommodation information is encoded in terms of motion and how they are integrated, but it's a possible explanation. Another question. Jenny Read. Yeah, thanks. Great talk. My recollection of the literature on apparently circular cylinders and so on. Do you need it a little bit slower so I can catch up with you? I'm sorry. A little more American will be hard. It's very viewing-distance dependent, I believe, these effects. I think your experiments were done at one viewing distance. My recollection of the literature is that a lot of it can be explained by assuming that we don't know our own vergence. We give too much weight to, I guess, a prior toward an intermediate vergence rather than our actual true vergence. Did that explain at least some of your results? Yes, that's another possible explanation, which is that when you have a greater amount of disparity, your convergence angle is more likely to be drawn toward the depths. As a result, your estimation of depth could be influenced by it. Actually, on Wednesday, I'm going to present a study in which we measure the vergence angle and we manipulate the disparity level, and that information should be informative. All right. I have one question. Sure. What's the only difference in your presentation from actually having real objects approaching? Just the accommodative cue? Because the screen required a constant accommodation, but the disparities and the size change were all the same as if it had been a real object. Was the accommodation that Marty mentioned the only discrepancy where it differed from real objects approaching? I'm a movement person, so I study eye movement. So my inclination is to say no; I think vergence is also providing information in terms of the perception of depth. Some optometrists may not agree with that, because accommodation is much stronger and relative depth is much more useful information. But real objects would cause the same change in vergence as the stereoscopic? Well, we did observe that when you are tracking the object in depth, your convergence angle is increasing, but not at the rate predicted by the disparity. It's much less, but it's increasing. So you think a real object would produce different vergence? Yes. There will be another study in which we compare the convergence angle and see if we can predict whether the amount of error is related to the amount of convergence lag. And so far we don't have any quantitative evidence to indicate one way or the other. All right. Let's thank our speaker again. Thank you.
|
Purpose: The study evaluated the accuracy of depth perception afforded by static and dynamic stereoscopic three-dimensional (S3D) images with proportional (scaled to disparity) and constant size cues. Methods: Sixty adult participants, 18 to 40 years (mean, 24.8 years), with good binocular vision participated in the study. For static S3D trials, participants were asked to indicate the depth of stationary S3D images rendered with 36, 48 and 60 pixels of crossed disparity, and with either proportional or a constant size. For dynamic S3D trials, participants were asked to indicate the time when S3D images, moving at 27, 32 and 40 pixels/sec, matched the depth of a reference image which was presented with 36, 48 and 60 pixels of crossed image disparity. Results: Results show that viewers perceived S3D images as being closer than would be predicted by the magnitude of image disparity, and correspondingly they overestimated the depth in moving S3D images. The resultant depth perception and estimate of motion speed were more accurate for conditions with proportional and larger image size, slower motion-in-depth and larger image disparity. Conclusion: These findings possibly explain why effects such as looming are over stimulating in S3D viewing. To increase the accuracy of depth perception, S3D content should match image size to its disparity level, utilize larger depth separation (without inducing excessive discomfort) and render slower motion in depth. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/31486 (DOI)
|
Thank you for giving me an opportunity to present my work. My name is Ho-Sik Sohn, from KAIST, South Korea. The title of my paper is Disparity Remapping to Ameliorate Visual Comfort of Stereoscopic Video. For a couple of years, we have been working on objective visual comfort metrics to predict the visual discomfort of 3D content. Our research interest naturally led to the question of how to improve visual comfort by adjusting the disparity of 3D content. This is a pilot result of that research. As we know, the 3D cinema industry has become a reality with great economic success. There have been great efforts by the industry to extend such success to other 3D applications such as mobile, gaming, or online 3D video. And it is generally agreed that a possible obstacle to the widespread deployment of 3D services is related to the visual comfort issue. Indeed, the industry has dedicated effort to address these concerns. However, the solutions provided require time-consuming and labor-intensive processing. As a result, the development of automatic tools to improve visual comfort has gained importance. In stereoscopic 3D content, excessive disparity magnitude and fast change of disparity have been known as key determinants of visual discomfort. As we know, stereoscopic viewing induces the accommodation-vergence conflict, and this conflict worsens with excessive disparity. In particular, as the disparity magnitude increases, the degree of conflict also increases, so that more visual discomfort can be induced. In addition, fast change of disparity can also induce visual discomfort, possibly due to excessive demand on the accommodation and vergence linkage. So currently, disparity remapping has been suggested as one way to improve the visual comfort of 3D content. And in most cases, disparity remapping has been used to minimize the discomfort caused by excessive disparity. One common approach is to map the entire disparity range into a comfortable range. However, the problem is that even when the disparity is within the comfortable range, visual discomfort can still be induced. Right here. So for instance, fast change of disparity, excessive disparity gradient, or high spatial frequency — we have many additional content characteristics that can affect our visual comfort. Considering that the severity of visual discomfort is highly correlated with disparity magnitude, we can further decrease the scene disparity range to minimize the negative effect of those discomfort causes on visual comfort. However, in this case the problem is that, by decreasing the disparity range of the scene, we can improve the visual comfort, but it can result in a degradation of depth quality and of the perceived depth range. So the motivation of our work is to locally scale the disparity of the problematic regions. In this way, we can improve visual comfort while preserving the perceived depth range. So the contribution of our paper is a local disparity remapping that aims at reducing the visual discomfort induced by fast change of disparity in stereoscopic video. To this purpose, we detect visual importance regions, which have a dominant influence on our visual comfort, and quantify the level of visual discomfort in those regions. And then a local disparity remapping is generated to compress the local problematic regions. This slide shows our framework of local disparity remapping.
We first detect visual importance regions from the input stereoscopic video. As aforementioned, visual importance regions represent the regions which have a dominant influence on our visual comfort. By analyzing the motion and disparity characteristics of those regions, we quantify the degree of visual discomfort induced by those regions. And a local disparity remapping function is generated to compress the problematic disparity planes according to the severity of visual discomfort. In order to detect visual importance regions, we employed a computational visual attention model. In particular, we used image motion and depth-based saliency maps to generate the visual importance map. This figure shows an example of the image motion and depth saliency maps of an input stereoscopic video. And considering that the different modalities contribute differently to the final visual importance map, we combined these saliency maps using a linear combination. Then, using a simple threshold segmentation, we obtained the visual importance regions from the input video. Then, in order to quantify the visual discomfort in those regions, we measured the magnitude of in-depth motion at each visual importance region for each frame. So we can obtain disparity features for multiple regions for each frame. Here we assume that the most problematic region can dominantly influence the overall visual comfort. So we took a maximum pooling strategy: we chose the maximum feature value from each frame. Then, to further quantify the visual discomfort level, we consider individual disparity planes across different frames. So we have multiple feature values over several frames, and we divide those feature values according to the disparity of their visual importance region. Here, this term represents the average disparity of the visual importance region where the in-depth motion feature was extracted. So we divide the feature values into multiple sets, and among them we take the maximum value, with an assumption of a winner-take-all mechanism in the temporal domain. Finally, using this information, we quantified the relative level of visual discomfort across different disparity planes in the video by normalization. This is a simple example to represent the process of visual discomfort quantification. Here, in this video, the most foreground butterfly was detected as the visual importance region, which has a very large disparity magnitude and a fast change of in-depth motion. The first figure shows the in-depth motion feature value, which approximates the magnitude of in-depth motion, and the second figure shows the average disparity magnitude of the visual importance region. Finally, the third figure shows the relative level of visual discomfort across different frames within a shot of the input video. We can see that since a fast change of disparity occurs within the disparity range of 3 degrees to 3.5 degrees, the relative level of visual discomfort is very severe around these disparity values. So now we have the information of relative visual discomfort for the stereoscopic video, and we use this information to compress the disparity of the stereoscopic video. The basic concept of the proposed method is to compress the local disparity planes according to the relative level of visual discomfort. Here, C is the discomfort level information we obtained in the previous slide. We use this information to determine the amount of disparity compression at each disparity plane.
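As a rough illustration — my own reading of the steps just described, not the authors' implementation — the per-plane discomfort quantification could be sketched like this in Python, where `frames` is assumed to hold, for each frame, a list of (motion_feature, region_mean_disparity) pairs for the detected visual importance regions:

import numpy as np

def discomfort_per_plane(frames, disparity_bins):
    # One relative discomfort level per disparity plane (bin), in [0, 1].
    levels = np.zeros(len(disparity_bins) - 1)
    for regions in frames:
        if not regions:
            continue
        # maximum pooling over regions: the most problematic region dominates the frame
        feature, disparity = max(regions, key=lambda r: r[0])
        b = np.digitize(disparity, disparity_bins) - 1
        if 0 <= b < len(levels):
            # winner-take-all over time within each disparity plane
            levels[b] = max(levels[b], feature)
    # normalise to a relative level of visual discomfort
    return levels / levels.max() if levels.max() > 0 else levels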
So here, this figure shows an example of the disparity remapping function that compresses just one disparity plane. The x-axis represents the original disparity and the y-axis represents the remapped disparity. Here, for disparity plane d_i, the amount of disparity compression is determined by C(d_i). In this way we compress each disparity plane according to the severity of visual discomfort. And here W, this parameter, is a weighting factor that determines the absolute amount of disparity to be compressed. Since each disparity plane has a different amount of disparity to be compressed, and compressing specific disparity planes affects the others, it is necessary to determine the proper remapped disparity value. Since we have multiple choices for the remapped disparity plane, in this case we took the minimum disparity value. This is the example for only crossed disparity; the same mechanism is applied to the uncrossed disparity. The reason why we took the minimum value for this interval is that we need to sufficiently compress the disparity of the problematic disparity plane. And this is the final example of the local disparity remapping. The left figure shows the relative level of visual discomfort, and the right figure shows the local disparity remapping function we generated using the method previously explained. We can see that the disparity between 3 degrees and 3.5 degrees has been compressed. To demonstrate the performance of the proposed local disparity remapping function, we used 18 stereoscopic videos, which consist of 9 scenes with 2 different camera baselines. This thumbnail shows an example of the stereoscopic videos we used for our experiment. These stereoscopic videos were generated to have a very excessive disparity range and fast changes of disparity. For our experiment, we applied two different disparity remapping methods. The first one was the global disparity remapping, which linearly scales the disparity range of the stereoscopic video. The second one was the combined global and local disparity remapping. By comparing the subjective scores between these two sets, we verify that the additional local disparity remapping can improve visual comfort while preserving the naturalness of the scene. To compress the disparity of the scene, it is necessary to determine the proper amount of disparity compression. For this purpose, we used an objective visual comfort metric, which is our previous work. For the global disparity remapping, we used the visual comfort model designed for disparity magnitude, which utilizes the disparity statistics of visual importance regions to predict the overall level of visual comfort of the input stereoscopic 3D content. Then, for the second video set, we applied the visual comfort model designed for disparity magnitude change. According to the prediction result, the absolute amount of disparity compression is determined. Then we applied the proposed local disparity remapping method. This is a very simple example of the stereoscopic video after the two different disparity remappings. The first one is the global disparity remapping and the second one is the combined global and local disparity remapping. Here you can see that after the local disparity remapping — recall that the foreground butterfly is the visual importance region in this video — the local disparity remapping compresses the most problematic region, here the butterfly. You can see that the disparity of this butterfly is decreased, while the disparity of the background, like the tree, is almost preserved.
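One possible way to realise the compression idea described above — a sketch under my own assumptions rather than the paper's exact function — is to shrink the width of each disparity plane in proportion to its relative discomfort level and rebuild a monotone lookup table:

import numpy as np

def build_remapping_lut(plane_edges, discomfort, weight=0.5):
    # `plane_edges` are the disparity-plane bin edges, `discomfort` the per-plane
    # levels in [0, 1], and `weight` the maximum fraction a plane may be compressed.
    widths = np.diff(plane_edges)
    new_widths = widths * (1.0 - weight * discomfort)   # shrink uncomfortable planes
    remapped = np.concatenate(([plane_edges[0]],
                               plane_edges[0] + np.cumsum(new_widths)))
    return remapped                                      # new edge positions, order preserved

def remap_disparity(disparity_map, plane_edges, remapped_edges):
    # Piecewise-linear interpolation keeps the mapping monotone, so depth
    # ordering is preserved while problematic planes are locally compressed.
    return np.interp(disparity_map, plane_edges, remapped_edges)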
To demonstrate the local disparity remapping, we conducted a subjective assessment of visual comfort and naturalness. We used the DSCQS method, which randomly displayed two different versions of the same content: one was the original, and the other was the version processed by the global remapping or by the combined global and local disparity remapping. We measured the visual comfort and naturalness using this score sheet. For the analysis of the results, we converted this continuous scale to a range from 0 to 10, and DMOS was used for the analysis. This is the subjective assessment result for visual comfort. Here the x-axis represents the content index and the y-axis represents the DMOS. The empty bars represent the improvement of visual comfort obtained by the global disparity remapping, and the filled bars represent the improvement obtained by the combined global and local approach. We observed that both the global and the combined approach improved the visual comfort with statistical significance. For the global approach the average DMOS was 8.5, and for the combined approach the average DMOS was 11, and the difference between these two scores was statistically significant. For naturalness, we observed that the overall improvement of naturalness was significant, but we could not observe any difference between the global and the combined approach. So in this paper, we proposed a local disparity remapping to compress the local problematic disparity planes in stereoscopic video. To this purpose, we detect visual importance regions and quantify the level of discomfort induced by those regions. And our subjective assessment shows that the proposed local disparity remapping can improve visual comfort while preserving the naturalness of the scene. Okay. Okay, that's all. Thank you. Thank you. Maybe one quick question, if you have any questions? So do I understand that by local, you mean local in depth disparity? Right. So you change some range of depth to some other range of depth? Yeah, right. So here local does not mean spatial information but depth information. So the local is in Z. Yeah, right. So let's thank the speaker. Thank you.
|
The great success of the three-dimensional (3D) digital cinema industry has opened up a new era of 3D content services. While we have witnessed a rapid surge of stereoscopic 3D services, the issue of viewing safety remains a possible obstacle to the widespread deployment of such services. In this paper, we propose a novel disparity remapping method to reduce the visual discomfort induced by fast change in disparity. The proposed remapping approach selectively adjusts the disparities of the discomfort regions where the fast change in disparity occurs. To this purpose, the proposed approach detects visual importance regions in a stereoscopic 3D video, which may have dominant influence on visual comfort in video frames, and then locally adjust the disparity by taking into account the disparity changes in the visual importance regions. The experimental results demonstrate that the proposed approach to adjust local problematic regions can improve visual comfort while preserving naturalness of the scene. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30587 (DOI)
|
Thank you, Mr. Chairman. My name is Ken Chirabash, from Tokyo University of Agriculture and Technology. The title of my talk is Multi-View Display Module Using MEMS Projectors for an Ultra-Large Screen Autostereoscopic Display. Ultra-large screen autostereoscopic displays allow many viewers to observe 3D images simultaneously and also provide more realistic impressions. This photograph shows tiled 3D displays having large screens, but they are glasses-type. In this study, we propose a multi-view display module using MEMS projectors to achieve large-screen autostereoscopic displays. Two types of multi-projection systems have been developed. Superposition types, proposed by Holografika and NICT, consist of a projector array and a screen lens. This system has a simple structure; however, it requires a large system depth. A tiling type, proposed by a group in Italy, consists of a projector array and light-controlling screens such as a lenticular lens or a parallax barrier. This system requires a small system depth; however, precise alignment and color matching among all projectors is required. In this study, as shown in this figure, we propose a multi-view display module using MEMS projectors which has a frameless screen, so that a number of modules can be tiled to provide a large screen. This slide shows the proposed multi-view display module. The proposed module consists of a MEMS projector array, a vertical diffuser, and a lenticular lens. All MEMS projectors have different horizontal positions, and all projected images are superimposed on the vertical diffuser. This module should produce a large number of viewpoints to serve many viewers in a large viewing area simultaneously. In our module, we use MEMS projectors because they have the following features: no projection lens, low energy consumption, and the generation of dots can be flexibly altered. This slide shows a related technique. It is a previously proposed combination of a 2D lens array and a 2D projector array. This system provides full parallax but low resolution, and the frameless screen feature is not considered. In contrast, this study proposes a combination of a 1D lens array and a MEMS projector array with a modified 2D arrangement. This system provides horizontal parallax and high resolution. Now, I'll explain the operating principle of the proposed technique. This figure shows a horizontal sectional view. The lenticular lens, having double lenticular surfaces, is located in front of the vertical diffuser. This figure shows the paths of rays emitted from three projectors located at different horizontal positions. Rays from the projectors are converged by the cylindrical lenses on the rear lenticular surface to generate light spots on the front lenticular surface, and are then deflected by the cylindrical lenses on the front surface. By placing the focal plane of the front lenses on the rear surface, all light spots emit rays in the same manner and work as 3D pixels. Therefore, the number of 3D pixels in each cylindrical lens is equal to the number of projectors. Next, I'll show you the vertical sectional view. Because rays are diffused vertically by the vertical diffuser, the difference between the vertical positions of the projectors is practically eliminated, so this area becomes the common viewing area. Therefore, as shown in these figures, even though the projectors have different vertical positions, the 3D pixels are aligned in the horizontal direction. As shown in this equation, the horizontal resolution is equal to the number of lenses times the number of projectors.
So, the horizontal resolution can be increased by increasing the number of projectors. This slide explains the dot generation scheme of the MEMS projector. As shown in this figure, by properly modulating the laser, the horizontal position of a dot can be changed among the horizontal scan lines corresponding to one 3D pixel. Rays from dots at different horizontal positions are deflected by the cylindrical lens to generate different viewpoints. So, the number of viewpoints can be increased by increasing the number of horizontal scan lines. This slide shows the advantages of the proposed system. A frameless screen can be realized. The image distortion of the MEMS projectors does not affect the positions of the 3D pixels. The alignment of the MEMS projectors affects only the projected images at the 3D pixels, and the image distortion of the MEMS projectors is less complicated and can be easily corrected electronically. Now, I'll explain the experimental system we constructed in this study. The MEMS projector we used was a SHOWWX+ laser pico projector, which is a commercial product provided by MicroVision Incorporated. We used four projectors. The size was 26 mm high, 77 mm wide, 145 mm long. The resolution was 848 x 480. The frame rate was 60 Hz. The wavelengths of R, G, and B are shown in this table. However, this MEMS projector is a commercial product, so the laser modulation scheme was fixed. Therefore, a slanted slit array was placed on the scanning plane to obtain the slanted dot alignment. This photograph shows the slanted slit array. It was fabricated by forming an emulsion mask on a glass plate. This slide explains the screen configuration. The number of dots generated by the MEMS projectors was 640 x 480. However, as shown in this slide, because the scan angle of the projector was limited, the four images were only partially superimposed. Therefore, the left and right areas, with a width of 59 mm, were covered by only three projectors. This photograph shows the lenticular lens. The lens pitch was 7.55 mm, the focal length was 11.4 mm, and the number of lenses was 40. Two identical lenticular lenses were combined to obtain the double lenticular surfaces. Now, I'll explain the design of the experimental system. The horizontal resolution of the system is given by the number of lenses times the number of projectors, so it was 160. Four scan lines correspond to one 3D pixel in this system. So, the vertical resolution of the system is given by the number of vertical dots generated by the MEMS projectors divided by the number of scan lines, so it was 120. And the number of views of the system is given by the number of horizontal dots generated by the MEMS projectors divided by the number of lenses, times the number of scan lines, so it was 64. This is the experimental system we constructed. The display consists of the MEMS projector array, a vertical diffuser, and two lenticular lenses. They were fixed in an aluminum frame. We did not construct a frameless screen because the aim of this study is to verify the proposed technique. The specifications are shown in this table. The screen size was 302 by 206 millimeters; that is 14.4 inches. The resolution was 160 by 120. The number of views was 64. The viewing angle was 36.6 degrees, and the system depth was 460 millimeters. We will demonstrate our system at the symposium demonstration session tomorrow night. If you would like to see the actual 3D results, or if you are interested in this system, please join us. Next, I will show the experimental results.
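For readers who want to check the design arithmetic just described, a trivial sketch using the figures quoted in the talk is:

# Reproducing the module design arithmetic (values taken from the talk:
# 40 lenticular lenses, 4 projectors, 640x480 effective dots, 4 scan lines
# per 3D pixel).
N_LENSES = 40
N_PROJECTORS = 4
DOTS_H, DOTS_V = 640, 480
SCAN_LINES_PER_3D_PIXEL = 4

horizontal_resolution = N_LENSES * N_PROJECTORS                     # 160
vertical_resolution = DOTS_V // SCAN_LINES_PER_3D_PIXEL             # 120
number_of_views = (DOTS_H // N_LENSES) * SCAN_LINES_PER_3D_PIXEL    # 64

print(horizontal_resolution, vertical_resolution, number_of_views)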
In order to confirm that the number of 3D pixels can be increased, we captured the 3D pixels while increasing the number of projectors. I'll play the movie. One projector, two projectors, three projectors, and four projectors. Enlarged view: one projector, two projectors, three projectors, and four projectors. As you can see, as the number of projectors increased, the number of 3D pixels increased. Next, I'll show this movie. This movie shows a generated 3D image. As you can see, the 3D image has smooth motion parallax. From the experimental results, the increase in the number of 3D pixels by the use of multiple projectors was confirmed, and the 3D images have smooth motion parallax. Next, the discussion. First, I'll talk about the speckle issue. The generation of speckle was less obvious because the laser light is diffused only in the vertical direction and is not diffused in the horizontal direction. Moreover, there is no interference between the light from different MEMS projectors. Second, I'll talk about the light intensity. The generated 3D images were not as dark as we thought, because the light is diffused only in the vertical direction and multiple projectors were used. Third, I'll talk about the vertical black lines. Vertical black lines appeared in the 3D image. Two reasons can be considered. First, the left and right areas of the display screen were covered by only three projectors, so there were only three 3D pixels in each cylindrical lens there. Second, as shown in this figure, the lens pitch of the front lenticular surface should be slightly larger than that of the rear lenticular surface; however, we used identical lenticular lenses on both surfaces. Now I conclude my talk. In this study, the multi-view display module using a MEMS projector array was proposed to realize large-screen 3D displays. The experimental display system was constructed using 4 MEMS projectors. The module had a 3D resolution of 160 by 120, it provided 64 views, and the screen size was 14.4 inches. The generation of the 3D pixels was confirmed, and the 3D images had smooth motion parallax. Thank you very much for your attention. Thank you, Kenji, for that interesting talk. Can I ask you to put up slide 5 or 6 that showed the system? Are there any questions for the author? Yes, it will be in the demonstration session, which I think is tomorrow night. Maybe next slide? The ray trace. Can you show the ray trace? No, back. Keep going. That. Yes. Are there any questions for the author? Yes, Mary Lou. Hi. Thanks. This is a great talk. It's fantastic. Can you talk about making the lenticulars that have... They're not uniform back to front. Did you make them yourself, or can you talk about where you got them? Or is it secret? Are the lenticular arrays custom or did you buy them? A commercial product. Who made them? No, you bought them. This is the point. That's the lenticular arrays from a catalog? Edmund. Edmund. Edmund Optics, okay. Edmund Optics, ladies and gentlemen. Interesting. Yes, everyone call Edmund right away. Any other questions? There's a follow-up. There are different spatial frequencies, maybe a little bit back. The second to last slide. What are the different spatial frequencies of what? On the pitch of the lenticulars, I thought. Some more. Another. Yeah, there, I thought the slightly larger pitch. Yeah, what does that mean? On the front surface, compared to the rear surface, slightly smaller. So that was still an off-the-shelf lenticular in the one depicted at the bottom of that slide. Why? Yes, Takaki-san, do you hear the question?
Can you talk about, so there are two lens arrays? Okay. Okay, so I guess it's hypothetical. So it sounds like they would like to use two different lens arrays, one of which has a different radius or pitch, but today pitch, but today they're the same. Correct. This is like a crowdsourced talk, this is great. Okay, I think we have time for one more question. All right, we're good. So we look forward to seeing your demonstration tomorrow night, thank you very much. Thank you.
|
A multi-view display module using microelectromechanical system (MEMS) projectors is proposed to realize ultra-large screen autostereoscopic displays. The module consists of an array of MEMS projectors, a vertical diffuser, and a lenticular lens. All MEMS projectors having different horizontal positions project images that are superimposed on the vertical diffuser. Each cylindrical lens constituting the lenticular lens generates multiple three-dimensional (3D) pixels at different horizontal positions near its focal plane. Because the 3D pixel is an image of a micro-mirror of the MEMS projector, the number of 3D pixels in each lens is equal to the number of MEMS projectors. Therefore, the horizontal resolution of the module can be increased using more projectors. By properly modulating lasers in the MEMS projector, the horizontal positions of dots constituting a projected image can be altered at different horizontal scan lines. By increasing the number of scan lines corresponding to one 3D pixel, the number of views can be increased. Because the module has a frameless screen, a number of modules can be arranged two-dimensionally to obtain a large screen. The prototype module was constructed using four MEMS projectors. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30591 (DOI)
|
Thank you, Mr. Chairman, for that introduction. My name is Raymond Fan. I'm a Ph.D. candidate at Ryerson University in Toronto, Canada, and I'm here to present my topic on converting 2D material into stereoscopic 3D using a different kind of method, which is semi-automated. So just to quickly outline my presentation: just an introduction and motivation for why we're actually taking a look at a different approach to converting 2D content into stereoscopic 3D, and then we're going to talk about the conversion framework for images first. We're basically combining two semi-automated segmentation algorithms into one cohesive approach using random walks and graph cuts. I'll give a little bit of detail regarding what those are, but more details can be seen in the paper. And then we're going to take a look at video conversion, which is just an extension of converting images. There are a couple of issues that we need to take a look at, for example key frames for labeling and also label tracking too. We'll take a look at that as we go, and then I'm going to take a look at some results. Unfortunately I read the guidelines a little too late; I didn't know that there was a process to convert video into stereoscopic in order to view it on these projectors. So I've got for you the depth maps, and hopefully that'll be enough to assess the quality of our work, and then we'll just wrap up with some conclusions. Okay, so 2D to 3D conversion, as you already know, is taking legacy content or single-view footage and turning it into stereoscopic or left and right views. There's a huge surge in popularity, especially if you go to the cinemas and you watch films like Batman Begins or Superman Returns, and the current accepted method is unfortunately very labor intensive and difficult, but it's actually quite accurate, and it's known as rotoscoping. IMAX is a big proponent of this method. Basically what you do is you take a look at a shot or a scene and you manually extract out objects of interest and you manually displace them into left and right views, using inpainting and other image-filling techniques to fill in the holes in order to make a more realistic experience. There's been a lot of research in 2D to 3D conversion in order to alleviate the difficulty, to minimize time, and to lower the cost. So the goal of 2D to 3D conversion, of course, is to create what is known as a depth map. It's basically an image that's the same size as the frame that you want to convert, and it's a monochromatic or black and white image where each point tells you the depth that you should experience at that particular point. So black means that the point is very close while white means that the point is very far, and in between are relative depths of gray. And depth maps are basically used as the main tool for this kind of conversion. Now the ultimate goal, obviously, with a lot of software packages is to do this on a more automatic basis, where you just put in a video sequence and you maybe change a couple of parameters and then you let it go and convert. Unfortunately, if you want to do this kind of conversion, errors can't be easily corrected and you might have to do a lot of pre- or post-processing in order to get it to the way that you want. So a good solution here is to do what's known as a semi-automatic approach, which is a happy medium in between automatic and manual methods.
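As a toy illustration of how a depth map drives this kind of conversion — not the renderer used in this work — each pixel can be shifted horizontally in proportion to its depth value, leaving holes for a separate inpainting step; the disparity scaling below is an arbitrary illustration parameter:

import numpy as np

def render_view(image, depth, max_disparity_px=20, sign=+1):
    # `depth` is a 2D map in [0, 1]; `sign` selects the left or right view.
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    shift = np.round(sign * max_disparity_px * depth).astype(int)   # per-pixel shift
    for y in range(h):
        xs = np.arange(w) + shift[y]
        ok = (xs >= 0) & (xs < w)
        out[y, xs[ok]] = image[y, ok]
        filled[y, xs[ok]] = True
    return out, ~filled          # the mask marks holes left for inpainting

# left, holes_l = render_view(img, depth, sign=+1)
# right, holes_r = render_view(img, depth, sign=-1)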
So what I mean by semi-automatic is that you give the system a couple of hints about what you believe the best depth is for particular regions or particular areas of the image, and then you let the algorithm figure out the rest of the depths for you on an automatic basis. So we are allowing the user to brush what they believe is close or what they believe is far, and then you figure out the rest of the depths on your own. In the case of video you have to mark several key frames, not just one frame. So you have to go through the video sequence and you have to figure out manually what you believe the best depths are, and then you let the algorithm solve it from there. But we also allow the user to use a computer vision tracking algorithm. All you have to do is mark one frame and then you allow the algorithm to track through the rest of the sequence, automatically figuring out what the labels are for the rest of the sequence. So how do we do this? We just merge two semi-automatic segmentation algorithms together, which are based on graph cuts and random walks. So random walks is just an energy minimization scheme. What it is is that you have a user-defined label and you have to figure out the probability of a random walker starting from that label and walking over to the rest of the unlabeled pixels in the image. And your goal is to classify every single pixel as belonging to one of k possible labels, and you figure out the probability for all those labels and you just choose the highest one. So what we want to do here is we want to modify random walks to create depth maps, and it's pretty simple. All you have to do is just take the user-defined depths and map them to the same span of values as the probabilities, which is just between 0 and 1. And the goal is just to solve for one label, which is just the depth. So we use a modification of random walks which is known as scale-space random walks. What you're doing is you're just applying random walks on a pyramidal scheme. You decompose your image into multiple scales, you apply the algorithm on all the scales, and then you merge them by the geometric mean. So what we're doing here is we let the user choose what they believe is the best depth for certain areas. You span it between 0 and 1. 0 is either a dark intensity or a dark color and 1 is a light intensity or a light color, and then you can vary in between. You brush areas of the image that you believe are close or far and then you let the algorithm solve it for you. You're probably wondering, well, how does the user know what the best depth is at a particular point in your image? Well, as long as you're perceptually consistent, it doesn't matter, the perception will be fine. There was a psych study done at the University of Tel Aviv in Israel that proves that as long as you're perceptually consistent you'll be able to see good results. So random walks does have its issues though. So this is a quick example. You've got a picture on the left, which is just a picture of the Ryerson campus, and then in the middle you have user-defined labels where yellow and red denote far and white denotes close. So this took about a couple of seconds to brush, and when you run through the algorithm you get something on the right. So you've got some good internal depth variation, which is good and minimizes the cardboard cutout effect, but if you take a look especially along weak edges you'll see that regions of one depth bleed into another, and that's not really good.
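To make the depth-propagation idea concrete, here is a compact single-scale sketch in the spirit of the random-walks step. This is my own simplified version, not the authors' scale-space implementation: the pyramid/geometric-mean merging and the graph-cuts prior are omitted, the image is assumed to be a 2D grayscale array in [0, 1], `seeds` carries user depths in [0, 1] with NaN for unlabelled pixels, and `beta` is a free smoothing parameter.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def propagate_depth(image, seeds, beta=90.0, eps=1e-6):
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = image.ravel()

    def edges(a, b):
        # Gaussian edge weights: similar pixels are strongly connected
        return a, b, np.exp(-beta * (flat[a] - flat[b]) ** 2) + eps

    a1, b1, w1 = edges(idx[:, :-1].ravel(), idx[:, 1:].ravel())   # horizontal neighbours
    a2, b2, w2 = edges(idx[:-1, :].ravel(), idx[1:, :].ravel())   # vertical neighbours
    rows = np.concatenate([a1, b1, a2, b2])
    cols = np.concatenate([b1, a1, b2, a2])
    vals = np.concatenate([w1, w1, w2, w2])

    W = sp.coo_matrix((vals, (rows, cols)), shape=(h * w, h * w)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W           # graph Laplacian

    labelled = ~np.isnan(seeds.ravel())
    x = np.zeros(h * w)
    x[labelled] = seeds.ravel()[labelled]
    u = ~labelled
    # Dirichlet problem: the unlabelled depths are the harmonic interpolation of the seeds
    x[u] = spsolve(L[u][:, u], -L[u][:, labelled] @ x[labelled])
    return x.reshape(h, w)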
We want to be able to have objects have a distinct depth without bleeding into other regions. So the internal depth variation is good and minimizes your cardboard cutout effect, but we need to respect object boundaries, and that's where we decide to merge it with another segmentation algorithm, which is known as graph cuts. So graph cuts is pretty much a hard segmentation where you provide, you know, a certain number of labels and then the output will only belong to one of those possible labels. Well, with random walks you're able to generate depth maps with values that you did not manually specify in the beginning. So what graph cuts does, just as a very general thing, is it's a computer vision problem that solves the maximum a posteriori Markov random field problem with user labels. You just basically consider an image as a weighted, connected graph, you solve it using the max-flow/min-cut problem on that graph, and that pretty much gives you what your solution is. So as we talked about, if you think about it, making depth maps is kind of like a segmentation problem. What you want to do is you want to go through each pixel and you want to label each pixel as corresponding to one of k possible labels, and each label is a particular depth. All right, but unfortunately graph cuts is a binary segmentation problem; you can only classify between zero and one. So there's got to be some way that you can modify this so that you're accommodating multiple labels. So what you do is you give each user-defined label in your image an integer label, you know, between one and n where n is the number of labels that you've got, and then you perform a binary segmentation for each of these labels. So you go through the first label, you set that as foreground, you let everything else become background, you solve it, and then you go through the next one, and then you keep repeating. And then for each solution you have a maximum flow value, and then you figure out which of the results gives you the highest flow, and that particular pixel will get the label that you want. Okay, but there are some problems here of course: you've got good object boundaries, but there's no internal depth variation. So it's good if you want to be able to differentiate between the objects, but if you were to try to perceive this on stereoscopic hardware it's going to look like cardboard cutouts, and that's not good. So we can make use of this: we can merge random walks, which has good internal depth variation but doesn't really respect any of the edges, and graph cuts, which does respect the edges but has no internal variation. So if we decide to merge these two things together you get the best of both worlds: you minimize the cardboard cutout and you also respect object boundaries very well. So before we merge, you probably notice that graph cuts has integer labels and random walks has floating point labels. So what you're doing here is, when you're brushing over the image, you're going to first use floating point labels and then you convert that to an integer lookup table. So each unique depth that you've got, you map that to an integer set, you solve using graph cuts, and then you map that back to floating point so you can merge them together, so you're pretty much comparing apples with apples.
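Structurally, the one-vs-rest loop just described might look like the following sketch. This is my own reading of the description, not the authors' code, and binary_graph_cut is a hypothetical stand-in for any max-flow/min-cut based binary segmenter.

import numpy as np

def depth_prior(image, seeds):                       # seeds: NaN = unlabelled pixel
    depths = np.unique(seeds[~np.isnan(seeds)])      # floating-point depths -> label table
    best_flow = np.full(seeds.shape, -np.inf)
    prior = np.zeros(seeds.shape)
    for d in depths:
        fg = np.isclose(seeds, d)                    # strokes at this depth = foreground
        bg = ~np.isnan(seeds) & ~fg                  # all other strokes = background
        mask, flow = binary_graph_cut(image, fg, bg) # hypothetical helper, returns mask + flow
        update = mask & (flow > best_flow)           # keep the strongest result per pixel
        prior[update] = d
        best_flow[update] = flow
    return prior                                     # hard-segmented depth prior for random walks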
So just a quick summary of the method: you place user-defined strokes on the image and then you create what's known as a depth prior, which is, you know, your graph cuts, and then you feed this graph cuts information into your random walks as an additional piece of information. And then if you're not satisfied with the results you can just modify, you know, the strokes and then you can rerun the algorithm at will. Okay, so this is the final result for our first example. If you take a look, you have the depth map for random walks on the left, you've got graph cuts in the middle, and then you have the final modified version on the right that combines both of the things together. So as you can see here, the result on the right is a good compromise between the two. So I'm going to quickly go through video, then I'll go through some results. Unfortunately I'm running out of time, but I'll try to go through this as fast as I can. So we're going to move on to video. Pretty much what you have to do is you have to mark more than one image or one key frame, so you can't just get away with marking one frame, unless you're using the modified thing that we're going to take a look at, but we'll talk about that later. What we're going to do is we're going to assume that there are no shot changes in the video, so if you have multiple shots it's up to you to decompose it manually or use a shot detection algorithm to figure it out for you. But the result will be basically a sequence of depth maps, one per frame. Okay, we also have to be aware of memory constraints. It would make sense to process each frame individually, but then you're not considering any of the temporal relationships; that'll be broken, so you're going to get some flickering if you try to process it individually. It would bode well with memory, and you could do it in parallel, but that may not work. It also may be prohibitive if you want to try to process it all in memory at once; it'll just exhaust all your available memory. So what we're going to do is we're going to do block processing. What that is is that you take overlapping blocks, you know, so for each frame that you've got, you have, you know, plus or minus some frames, you have a five-frame block per frame, and then you solve the algorithm for each block and then you just use the center frame as the final output. So we don't use any of the overlapping blocks, but that might be used, you know, in future work for some initialization for another part of the algorithm. Okay, so here's the question: how many frames do we label? Well, we allow the user to manually choose the key frames that they want to label, but if you just choose a small number of frames that might create depth artifacts.
Here's a quick example. If you take a look at the very first row, this comes from the Sintel sequence. It's a 30-frame sequence, and you see that the user just labeled three frames. In the middle row, this is what happens if you try to convert the video using just the three frames, and if you take a look at it, it starts from left to right, and in the middle you'll see that the objects kind of fade in depth, and that's not what we want. But if you take a look at the bottom, this is what happens when you label all the frames, and you see that the depth is pretty consistent. So the goal here is to try to label as many frames as possible. Labeling all frames is better; not doing this results in depth artifacts, essentially you'll see the points that quickly fade in depth. So it's pretty manually taxing to try to go and, you know, manually label all the frames in your sequence. So what we want to do is we're going to use a label tracking algorithm, where all you have to do is just label one frame and you let a computer vision algorithm track the labels through the rest of the frames. So it's as if the user did label the frames, but you're letting the computer do that for you. So what you're doing is you're going to decompose a stroke into n points, you track each of those points individually, and then you piece them all together using spline interpolation. So it's pretty taxing if you wanted to try to track all the points in the stroke, so we just decompose it into just a small set of points. Okay, the tracker algorithm that we use is TLD, it's tracking-learning-detection, by Kalal, and all you have to do is just draw a bounding box around the object that you want to track and then you let that track throughout the rest of the sequence. So all you're doing is that you're just putting a bounding box around each point in your stroke and then you're just letting it track, and then you just recompose each of the points into a stroke. What we do here is we automatically adjust the depth in case the object, you know, moves in and out of the plane parallel to the camera. Unfortunately I'm running out of time, so if you want to figure out how we automatically adjust the depth, just go ahead and take a look at the paper for more details. Okay, so here are some results, just a couple of examples. So on the top you have a frame from the Avatar trailer; this is unobtainium floating on a levitation pad. The user took maybe 10 seconds to mark this frame and then the result is on the right. So you see here that it's pretty consistent: you've got the rock and the levitation pad coming to the front, and the rest of it is slowly varying to the background. And then you have a couple of shots here of downtown Boston. If you take a look here, the one on the left only took about five seconds to mark, the algorithm took maybe a few seconds to run, and then you see Boston looking like that, and that's actually pretty good. And the same shot on the right, there's only about four strokes in total; you run the algorithm and then that's pretty much what you get. Okay, so we have a couple more examples here. This is an example of Big Buck Bunny; it's a sequence that's well known. What we want to do here is we just want to track the first frame: you draw strokes along the first frame and then you let the computer vision algorithm track the rest of the frames. So the very first frame is at the top, we just drew one set of strokes, and then the rest of the frames on the bottom are the computer vision algorithm tracking
the rest of the strokes, and then you can see here it's actually pretty good, especially with the object on the front that's moving towards the background. Okay, here's what the depth maps look like — pretty consistent, as you can see here. The very first result is at the top, and you can see the depth maps are pretty good, and then this is what happens with all the depth maps throughout the sequence using the labels from the computer vision tracker, and you can see here that it's pretty consistent as well. And last but not least we have the Formula One sequence. You have the user-defined frame at the very top. We're using this because you have a Formula One car that is starting from the very front and then it moves towards the background, and we did this so you can see that the depth is being adjusted as the car moves. So if you take a look at the very top, you have some strokes that are relatively close, and as you see the car move towards the background, you see that the depth starts to decrease — or increase, rather — and that's pretty much what you want, and those are the depth maps too. All right, so in conclusion: we made a semi-automatic method for 2D to 3D conversion. Automatic requires error correction and pre- or post-processing, manual is time consuming and expensive, so we decided to merge the two together to create a better algorithm, and, you know, you allow the user to correct errors and you can run the algorithm fast. It works for both images and video. We just merged the two segmentation algorithms together, and for video we just modified a computer vision tracking algorithm to track the rest of the strokes throughout the frames. And I'm done. Do I have time for questions? Okay, thank you. Thank you. I'll entertain any questions you have, some questions or comments. I was just wondering, what is the computation time to calculate, let's say, one frame using your method? It totally depends on the resolution. We take into account the time it takes for the user to mark the frame, which can be perhaps five to ten seconds for an experienced user, or maybe 10 to 20 seconds for a non-experienced user, and the computation time is very fast: a low VGA-resolution frame takes about a couple of seconds, and for something that's a little more high-def it takes a little longer, maybe five to ten seconds per frame. Hi, Brian Cullen from the University of War do, and I was just wondering, there was a wall in one of the images at the beginning, and at the top of the wall there seem to be problems there. I was just wondering if tagging in terms of top, side, and front would help as well? Oh definitely, yeah, that thing is definitely seen in practice, and I haven't actually implemented that here, but that would definitely help in terms of, you know, the depth gradients and all that. But yeah, that's a good idea, thanks for suggesting that. What do you mean by random walk, just the unpredictability of some object moving in the scene? The theoretical basis: you can think of random walks as turning an image into a circuit where each pixel is a node and there are graph weights which act as resistors, and what you're doing is you're trying to find the right voltage across each of the nodes given source voltages, which are your user-defined labels. So you're trying to find the best voltage map through all of the pixels, and those voltages are essentially used as probabilities that you can use for depths.
So you can abstract it to a circuit; I know it is a circuit problem, and that's just the theoretical approach. I'm not quite sure how to express it from a mathematical point of view, but I know what it is from a circuit point of view. Okay. Please, if possible, be seated, people. The speaker is Dr. Patrick Vandewalle from Philips Research, and the title of his paper is Temporally Consistent Disparity Estimation Using PCA Dual-Cross-Bilateral Grid.
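To make the circuit analogy above concrete, here is a minimal sketch, in Python, of the standard random-walker formulation: the image becomes a graph, edge weights act as conductances, and the user strokes fix boundary values. It is not the authors' code; it simplifies their multi-label setup by interpolating the scribbled depth values directly as a single harmonic function, and the function and parameter names are illustrative.

```python
# Minimal sketch (not the authors' code) of the random-walker idea described in
# the answer above: the image becomes a circuit, edge weights act as
# conductances, user strokes fix "voltages", and solving a Laplacian system
# gives every unmarked pixel a value that can be read as a depth.
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def random_walker_depth(gray, seeds, beta=90.0):
    """gray: HxW image in [0, 1]; seeds: HxW array, NaN where unlabeled,
    otherwise the scribbled depth in [0, 1]. Returns a dense HxW depth map."""
    h, w = gray.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    L = lil_matrix((n, n))

    def connect(a, b, ia, ib):
        wgt = np.exp(-beta * (a - b) ** 2) + 1e-6   # intensity-based conductance
        L[ia, ib] -= wgt
        L[ib, ia] -= wgt
        L[ia, ia] += wgt
        L[ib, ib] += wgt

    for y in range(h):                              # 4-connected grid (clear, not fast)
        for x in range(w):
            if x + 1 < w:
                connect(gray[y, x], gray[y, x + 1], idx[y, x], idx[y, x + 1])
            if y + 1 < h:
                connect(gray[y, x], gray[y + 1, x], idx[y, x], idx[y + 1, x])

    L = csr_matrix(L)
    seeded = ~np.isnan(seeds).ravel()
    free = ~seeded
    d_s = seeds.ravel()[seeded]
    # Harmonic interpolation: L_ff * d_f = -L_fs * d_s, like solving node voltages.
    d_f = spsolve(L[free][:, free], -L[free][:, seeded] @ d_s)
    depth = np.empty(n)
    depth[seeded] = d_s
    depth[free] = d_f
    return depth.reshape(h, w)
```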
|
We create a system for semi-automatically converting unconstrained 2D images and videos into stereoscopic 3D. Current efforts are done automatically or manually by rotoscopers. The former prohibits user intervention, or error correction, while the latter is time consuming, requiring a large staff. Semi-automatic mixes the two, allowing for faster and accurate conversion, while decreasing time to release 3D content. User-defined strokes for the image, or over several keyframes, corresponding to a rough estimate of the scene depths are defined. After, the rest of the depths are found, creating depth maps to generate stereoscopic 3D content, and Depth Image Based Rendering is employed to generate the artificial views. Here, depth map estimation can be considered as a multi-label segmentation problem, where each class is a depth value. Optionally, for video, only the first frame can be labelled, and the strokes are propagated using a modified robust tracking algorithm. Our work combines the merits of two respected segmentation algorithms: Graph Cuts and Random Walks. The diffusion of depths from Random Walks, combined with the edge preserving properties from Graph Cuts is employed to create the best results possible. Results demonstrate good quality stereoscopic images and videos with minimal effort. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30594 (DOI)
|
Thank you very much. So, quite similar to the last talk, what we're interested in studying is the influence in 3D presentations when you combine stereoscopic 3D with 3D audio technologies. What we're really interested in is the implications for how you create 3D content when you have to consider both audio and 3D video. So, as the previous presenters stated, there is this known modal dominance of visual cues over audio cues, things like ventriloquism, whereby vision can alter the perception of where a sound source comes from. And what this indicates is the potential for a zone of congruence, whereby you could have some small mismatches between audio and visual stimulus positions. So what we want to do is to quantify that congruence for stereoscopic 3D video in a home environment or in a cinema environment. What we did in these experiments was basically to estimate the zone of congruence for audio and visual stimulus positions along the depth axis: how far apart could those sources be separated while still being perceived as coming from the same position? Okay, so we conducted two tests, and in each one we varied a number of factors to see how they affected the zone of audio-visual congruence. In the first experiment we were looking at determining the zone of congruence with respect to visual depth. So you have a loudspeaker, which is representing the visual source of your sound, you vary the position of the loudspeaker, and you want to measure the zone of congruence in each case. The second factor we considered was the familiarity of the audio source. So we have a pink noise source, which represents an unfamiliar sound source, and then speech, representing a familiar sound source, which may have additional depth cues in it that would affect the perception of depth. And in the second experiment we considered other factors, including the visual stimulus type: whether you use a static visual stimulus like a loudspeaker or whether you have a human voicing the sounds, does that make a difference? And finally the test environment; what we are really testing there is reverberation, the amount of reverberation in your room, and how that affects those kinds of judgments. Okay, so before I describe the two experiments themselves, I just want to talk about the capture and display methodologies we used for both audio and video. For the two experiments that we did, we actually adopted two different capture methodologies, and the reason why is that test two uses moving stimuli, and that meant that we had to adjust. For the static stimuli, which are only in test one, we would simply capture the stereo 3D by scanning a camera using a linear positioner at a controlled speed, along an axis perpendicular to the depth axis, and that would allow us to generate the stereo 3D pair by just extracting appropriate frames from the video. The advantage of that methodology was that it allowed us to vary the inter-camera distance to match the interpupillary distance of the test subject. Whereas for the second experiment we just used a conventional stereo rig, because we couldn't capture moving stimuli with the first methodology. But unfortunately we lose the ability to vary the IPD, so we set the camera separation to be just the average eye separation for humans.
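As a side note on the single-camera capture just described, the left and right frames of a stereo pair can be chosen by converting the desired baseline into a frame offset. The sketch below is an assumed illustration only (the positioner speed, frame rate, and IPD values are made up), not the authors' tool.

```python
# Illustrative only (assumed numbers, not the authors' tool): choose two frames
# from a constant-speed camera sweep so that the camera positions are separated
# by the viewer's interpupillary distance (IPD).

def stereo_frame_indices(ipd_mm, speed_mm_per_s, fps, left_index=0):
    """Return (left, right) frame indices whose capture positions are ~ipd_mm apart."""
    mm_per_frame = speed_mm_per_s / fps          # camera travel between frames
    offset = round(ipd_mm / mm_per_frame)        # frames needed to span the IPD
    return left_index, left_index + offset

# e.g. a 63 mm IPD with a 10 mm/s positioner filmed at 25 fps:
print(stereo_frame_indices(63.0, 10.0, 25.0))    # -> (0, 158)
```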
So when generating the depth cues, basically what we want to do is make sure that we have some calibration between the position of the audio source and the position of the video sources. From a visual point of view, what we try to do is generate conditions as close as possible to orthostereoscopic conditions. The first condition is to match the camera baseline with the IPD, which is what I described in the previous slide. Also, you have to adjust the field of view of the camera to match the extent that the display subtends in the viewer's field of vision. We typically shot with a wider field of view than we needed and just cropped the image by the factor you can see there; basically all those values are known, so you can work out the correct ratio from the image resolution and the image size. And then finally we have to adjust the convergence point to the screen depth, and that's done by applying a simple horizontal image translation to each image. So now on to the audio. Basically what we want to do is simulate a home listening environment, but we adopted a somewhat novel approach to doing that. Instead of rendering over a loudspeaker array, like the 5.1 array the previous speaker was talking about, we used a virtual loudspeaker array delivered over headphones. Now, there are a number of aspects to this procedure, so I'm going to describe each one in turn. The first one is the actual format we use to store the 3D sound fields. We used a first-order ambisonics approach to represent 3D sound fields. The basis of ambisonics is that you represent the 3D sound field as a pressure field centered on a point surrounding the listener, and you decompose that sound field into a weighted sum of spherical harmonic basis functions. So for first-order ambisonics we're using zero- and first-order spherical harmonics. The main benefit of using ambisonics is that it allows you to decouple the choice of loudspeaker array from the way you capture the data, so you can decide on your loudspeaker array after you've captured all the data, which is kind of nice. And secondly, it allows you to easily rotate the sound field by simply rotating the spherical harmonic basis functions; that will come in useful later on when I go through the whole system. The second aspect is how the data is actually captured. Instead of recording the audio stimuli live in the room, the approach that we adopted was to record the audio stimuli in a studio and then insert all relevant audio depth cues by convolving the audio signal with a spatial room impulse response, which essentially imposes all the depth cues that pertain to the particular test environment. Again, the main reason for adopting this approach is that it allows greater efficiency in creating the audio stimuli for all the different test environments that we have and the different audio source distances, so we can generate the correct audio files for the different audio stimuli much more quickly, and it's also less prone to environmental noise than recording the data live. And then finally there is the virtual loudspeaker rendering. At this stage you have two choices: you could just adopt a conventional loudspeaker array approach, and because we've used ambisonics, we can choose any loudspeaker configuration we want.
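For readers unfamiliar with B-format, the following sketch shows a conventional first-order ambisonic encoding of a mono source and the yaw rotation used to keep the sound field stable under head rotation. It follows the traditional W, X, Y, Z convention with a 1/sqrt(2) weight on W; the authors' exact encoding chain may differ.

```python
# Sketch of conventional first-order (B-format) ambisonic encoding of a mono
# source, plus the yaw rotation of the sound field that head tracking relies on.
import numpy as np

def encode_bformat(mono, azimuth, elevation):
    """mono: 1-D signal; azimuth/elevation in radians. Returns (W, X, Y, Z)."""
    w = mono / np.sqrt(2.0)                        # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return w, x, y, z

def rotate_yaw(w, x, y, z, angle):
    """Rotate the encoded sound field about the vertical axis by `angle` radians."""
    xr = x * np.cos(angle) - y * np.sin(angle)
    yr = x * np.sin(angle) + y * np.cos(angle)
    return w, xr, yr, z                            # W and Z are unaffected by yaw
```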
In this case, we're just going for an octagonal loudspeaker array centered on the listener, on the user. But instead of using a real array, we're using a virtual array, so it's essentially trying to simulate the same sound field but over headphones rather than over a real array. And the challenge is to simulate the effect of the head and your shoulders and your pinnae on the audio signal, and that varies according to the speaker position. That's captured by what's known as a head-related impulse response, or HRIR. Essentially, for each loudspeaker position, to get the virtual signal, what you have to do is convolve what would be the loudspeaker output, or the mono source, with the two HRIRs for the left and right ear to generate the signals for the left and right ear. Then you repeat the process for every speaker in the array, and that allows you to get the virtual 3D sound field delivered over headphones. Now you might think that this is maybe a strange approach, but we've done a lot of experiments previously to show that this type of approach to rendering sound fields is perceptually equivalent to real sound sources. Here is an experiment which is a typical distance judgment experiment that you would see in the audio domain, where you have to judge the distance of a sound source, and we're comparing the various different methodologies. The two lines in the graph that are relevant are the black and blue lines, with the black line representing having an actual loudspeaker at a specific distance that you have to judge how far away it is, and the blue line is the approach we were adopting in this paper, the first-order ambisonics approach, and we can show, within statistical significance bounds, that they are perceptually equivalent. So the key thing that makes this work is head tracking. Head tracking is required essentially to create a stable 3D sound field: if I'm listening to 3D audio over headphones and I rotate my head, the sound field shouldn't rotate with me; the sound field should stay in the same position. So head tracking really is the thing that makes this work in practice. Now, the experiments themselves. In the first experiment, as I said before, we're comparing visual distance and stimuli and measuring the zone of congruence. We have three visual distances and 14 audio distances, and the same audio distances are used for each of the three visual distances. We have two audio stimuli, and all combinations of audio and visual stimuli are presented to the listener once. We had 20 test subjects, and they were placed in a darkened room. They were told to sit two meters away from the screen; it was a polarized display in a darkened room. They were presented with the UI you see at the bottom, where they were able to play a sound repeatedly, and they were asked to judge whether the sound came from in front of the loudspeaker, at the loudspeaker, or behind the loudspeaker. So here are the results of the first experiment. This first slide is just comparing the familiarity of the sound source and the zone of congruence. Let me just explain the graphs, and it's the same convention for all the graphs you see later on. The green line basically represents the proportion of people who think that the audio and visual source come from the same position.
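A hedged sketch of the virtual-loudspeaker rendering idea follows: decode the B-format signals to an octagonal ring of virtual speakers and convolve each feed with that speaker's left and right HRIRs. The decoder below is one simple first-order decode for a regular horizontal array, and the HRIR set is assumed to be supplied by the caller; it is not the renderer used in the experiments.

```python
# Hedged sketch of virtual-loudspeaker binaural rendering from B-format.
# The HRIRs are assumed to be equal-length 1-D arrays provided by the user.
import numpy as np
from scipy.signal import fftconvolve

def binaural_from_bformat(w, x, y, hrirs_left, hrirs_right, n_speakers=8):
    azimuths = 2 * np.pi * np.arange(n_speakers) / n_speakers
    out_l, out_r = 0.0, 0.0
    for k, az in enumerate(azimuths):
        # Feed for virtual speaker k (simple first-order, horizontal-only decode).
        feed = (np.sqrt(2) * w + 2 * (x * np.cos(az) + y * np.sin(az))) / n_speakers
        out_l = out_l + fftconvolve(feed, hrirs_left[k])
        out_r = out_r + fftconvolve(feed, hrirs_right[k])
    return out_l, out_r
```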
The red line is the percentage of people who think that the audio is in front of the visual stimulus, and the blue line is the proportion of people who think it is behind. You can see that the graphs follow the pattern you would expect. But if you're talking about the zone of congruence, we've defined it here as the range of depths for which the green value is highest, including the error bounds as well; if there's a large crossover between the error bounds, we don't count it as part of the zone of congruence. So for this slide, where we're comparing, for one visual distance, the difference between the pink noise audio stimulus and the female speech stimulus, you can see that we don't really detect any significant difference in the zone of congruence; it's pretty much the same. The zone of congruence starts somewhere closer than one meter, which we could reasonably hypothesize to be greater than zero, and it goes back to something like two and a half meters, which is half a meter behind the position of the stimulus. Now, we repeated this experiment for the other visual distances and reached the same conclusion at each distance: there is no statistical difference between the two sound sources. Moving on from that, we're now comparing visual distance. We have two, four and eight meters, and as you would expect, the zone of congruence increases as the visual distance increases. So you might have a range of, say, two and a half meters maximum for the two-meter visual distance, that increases to three and a half meters at four meters, and at eight meters it is somewhere over five meters, and could probably be even higher; we were limited in that scenario by the size of the room, as we actually couldn't go back much further than 11 meters, which was the back wall of the room. In the second test of our experiment, we are comparing the zone of congruence with respect to the test environment and whether we use the loudspeaker or the female speaker as the visual stimulus. And really what we're talking about is that test environment one just has more reverberance in it, and reverberance is known to be quite a strong depth cue that could effectively alter perception. I'm just going to quickly summarize the results. Basically, although we saw some slight variations in the results, I don't think we were convinced that there were actually any statistical differences in the zone of congruence across all the different combinations of factors. In fact, we have some known inaccuracies in the way we captured the data, which could explain some of the differences that are apparent in that graph. So our conclusion there is that there is no significant difference. Finally, to conclude, there is definitely a clear link between the size of the zone of congruence and the visual depth, but for all the other factors that we tested, we couldn't really detect significant differences. For future work, we would like to refine our capture procedures to get more robustness in them, and to get more data by testing on more subjects. And finally, we might like to determine how large or perceivable errors in audio and video position affect people's perception of quality and immersiveness. So thank you very much for your attention.
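One possible way to operationalize the zone-of-congruence rule described above (highest "same position" proportion, error bounds included) is sketched below. The response counts in the example are invented purely for illustration, and the simple normal-approximation error bars stand in for whatever statistics the authors actually used.

```python
# One way (assumed, with invented counts) to operationalize the congruence-zone
# rule: keep the audio depths where the "same position" proportion, minus a
# simple error bar, still beats the other two responses.
import numpy as np

def congruence_zone(depths, n_same, n_front, n_behind):
    depths = np.asarray(depths, dtype=float)
    counts = np.vstack([n_same, n_front, n_behind]).astype(float)
    n = counts.sum(axis=0)
    p = counts / n                                   # response proportions
    se = np.sqrt(p * (1 - p) / n)                    # rough error bars
    same_low = p[0] - se[0]
    others_high = np.maximum(p[1] + se[1], p[2] + se[2])
    mask = same_low > others_high
    return (depths[mask].min(), depths[mask].max()) if mask.any() else None

# Invented example: 20 subjects, five audio depths (counts: same / front / behind).
print(congruence_zone([1.0, 1.5, 2.0, 2.5, 3.0],
                      [6, 14, 17, 13, 5],
                      [12, 4, 1, 2, 3],
                      [2, 2, 2, 5, 12]))            # -> (1.5, 2.5)
```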
I'd be glad to answer any questions if you have time. Okay. Thank you very much. Questions for David? All right. Well, thank you very much. All right. Thank you.
|
In this paper we undertook perceptual experiments to determine the allowed differences in depth between audio and visual stimuli in stereoscopic-3D environments while being perceived as congruent. We also investigated whether the nature of the environment and stimuli affects the perception of congruence. This was achieved by creating an audio-visual environment consisting of a photorealistic visual environment captured by a camera under orthostereoscopic conditions and a virtual audio environment generated by measuring the acoustic properties of the real environment. The visual environment consisted of a room with a loudspeaker or person forming the visual stimulus and was presented to the viewer using a passive stereoscopic display. Pink noise samples and female speech were used as audio stimuli which were presented over headphones using binaural renderings. The stimuli were generated at different depths from the viewer and the viewer was asked to determine whether the audio stimulus was nearer, further away or at the same depth as the visual stimulus. From our experiments it is shown that there is a significant range of depth differences for which audio and visual stimuli are perceived as congruent. Furthermore, this range increases as the depth of the visual stimulus increases. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30596 (DOI)
|
I am from a university in Japan, and my talk concerns binocular disparity in stereoscopic 3D images and its relation to emotional expression. The motivation for this research is a problem: 3D images still cause quite a few problems for viewers, and one explanation for the resulting discomfort is the conflict associated with vergence. To discuss viewing comfort, we analyzed the disparity characteristics of four well-known 3D movies. We watched all of the movies in their Blu-ray 3D home versions, and disparity was measured in terms of parallactic angle. This figure shows the result of the disparity analysis for Avatar: disparity is plotted at ten-second intervals over the full length of the movie, with parallactic angle on the vertical axis and time on the horizontal axis. Over the roughly two hours of the movie, crossed and uncrossed disparities alternate, and the cycles in which the maximum values increase and decrease are quite short. Next, I will introduce the disparity analysis of emotional scenes. Emotional scenes occur naturally within a movie's story, and we expected their disparity characteristics to differ from those of the rest of the movie, so we extracted the emotional scenes from the four movies and analyzed the characteristics of their disparity in particular.
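For reference, a common way to convert an on-screen disparity into the kind of parallactic (angular) disparity discussed in this talk is to take the difference between the vergence angles to the screen and to the fused object. The sketch below assumes this standard geometry and a 65 mm eye separation; it is not necessarily the authors' exact definition.

```python
# Assumed sketch of the usual screen-disparity to parallactic-angle conversion.
import math

def parallactic_angle_deg(disparity_m, viewing_distance_m, eye_sep_m=0.065):
    """Positive disparity here means uncrossed (behind the screen)."""
    to_screen = 2 * math.atan(eye_sep_m / (2 * viewing_distance_m))
    to_object = 2 * math.atan((eye_sep_m - disparity_m) / (2 * viewing_distance_m))
    return math.degrees(to_screen - to_object)

# e.g. 1 cm of uncrossed screen disparity viewed from 1.7 m:
print(parallactic_angle_deg(0.01, 1.7))   # roughly 0.34 degrees
```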
|
The authors have analyzed binocular disparity included in stereoscopic (3D) images from the perspective of producing depth sensation. This paper described the disparity analysis conducted by the authors for well-known 3D movies. Two types of disparity analysis were performed; full-length analysis of four 3D movies and analysis of emotional scenes from them. This paper reports an overview of the authors’ approaches and the results obtained from their analysis. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30597 (DOI)
|
Thank you. Good morning, everyone. This work grew out of a collaboration between the 3D Imaging and Display Laboratory of the University of Valencia in Spain and the Realistic 3D group of Mid Sweden University in Sweden. I will start my talk by making a brief classification of the various methods that exist to generate 3D images. Later, I'll describe integral imaging systems in more detail, and to conclude, I'll present a method to extend the depth of field of integral imaging systems in the reconstruction stage. Well, as you will know, over the past decades the idea of projecting 3D movies has died and been resurrected periodically. This time, however, the idea seems to be here for a long stay; the proof of this is the current competition in the development of 3D TVs and 3D monitors. At present there is a wide variety of methods to obtain 3D images, and all these methods can be classified into two major groups: 3D with glasses and glasses-free 3D. Additionally, systems using special glasses can be classified into two subgroups: passive systems, in which the glasses have no electronic components and thus no need of a power source to operate, and active systems, in which the glasses have some kind of electronic component and thus need a power source to operate. In the other group we have autostereoscopic systems; such systems don't require the use of special glasses. From the theoretical point of view, holography provides the best 3D sensation achievable. However, at present there are still some problems with this technique, which make much more research in this field necessary. Our group focuses its research on integral imaging systems. These systems work with incoherent illumination and hence can capture and display true color 3D images. Integral imaging can provide stereo parallax as well as full and continuous motion parallax, allowing multiple viewing positions for several viewers. Due to its advanced degree of development, integral imaging could be ready for massive commercialization in the coming years. Now, some history about the technique. Integral imaging was initially proposed by Gabriel Lippmann in 1908 and was reborn about two decades ago due to the fast development of electronic image sensors and displays. Lippmann's idea was that one can record many elemental images on a 2D sensor, so that any elemental image stores the information of a different perspective of a 3D scene. When we project this information on a display placed at the focal distance of a microlens array, any pixel generates a cylindrical ray bundle, and it is precisely the intersection of these ray bundles which produces the local concentration of light density that permits the reconstruction of the 3D scene. The reconstructed scene is perceived as 3D by the observer over a wide range of positions. But what is more interesting is the fact that integral imaging can be used not only for 3D display but also for the topographical reconstruction of 3D scenes. Again, if we project the elemental images on a display placed at the focal distance of a microlens array, and now we scan along the axial direction with a diffusing screen that is parallel to the microlens array, we can recover the original objects and their defocused versions. Of course, such reconstructions can be made optically or by computer calculations. In this work we are interested in performing computational reconstructions, focused at different depths, from a set of elemental images that we have captured experimentally in our lab. Well, this is the setup that we use.
Instead of using a microlens array, we have used the so-called synthetic aperture method, in which a digital camera is mechanically translated to different positions in order to capture different perspectives of the 3D scene. As we will see later, parts of the scene that appear blurred in the elemental images appear also blurred in the reconstructions. We have modeled the camera lens of the capture setup from experimental parameters such as the diameter of the entrance pupil of the camera lens, its distance to the in-focus plane, and the lateral magnification between the in-focus plane and the camera sensor. By knowing the impulse response of the capture setup, we can reverse the out-of-focus blur caused by the optical system. As you can see in the equation, this is the impulse response of the capture setup. The impulse response strongly depends on the distance to the reference plane, so that if we know the distances of the different objects composing the 3D scene to the reference plane, we can apply a depth-dependent deconvolution. To show you how this depth-dependent deconvolution works, we start by capturing a set of elemental images. With the synthetic aperture method, we captured a set of 11 by 11 elemental images with 2,560 by 1,920 pixels each. We don't take into account the color information, but it can be included in the calculations if desired. Well, an integral image may be considered as a set of stereo pairs, and according to this, we can use well-developed stereo vision algorithms to get the disparity information. Now, I'm going to show you how we can extend the depth of field of each captured elemental image. For example, let me take the elemental image on the left side of the stereo pair, enclosed by the red line. From the stereo pair, we can calculate the disparity map for this elemental image. And now, from the disparity map and from the information of the capture setup, we can get the depth information of the different objects composing the 3D scene. So we already have everything we need: we have the depth information of the different objects composing the 3D scene, and we have the theoretical PSF for each depth, so that we can filter and deconvolve each elemental image for each axial interval. Depending on the complexity of the surfaces composing the 3D scene, we must increase the number of axial intervals into which we divide the scene. For example, our scene was mainly composed of three objects located at three different depths, and we also have the vast background, so we divided the whole scene into four axial intervals. The final elemental image is obtained as the sum of the different intervals after being filtered and deconvolved. As you can see in the image, there are some features that lead to a poor visual aspect. For example, the abrupt transitions are due to the borders of the areas selected in the depth map, and the ringing is due to the oscillating values in the calculated PSF, which cause ripples in the image at rapidly changing intensities. These ripples could be reduced by using a convolution kernel with less oscillating values, as long as the convolution kernel retains the correct geometrical support. But this is not a problem, because in the reconstruction stage all the elemental images are superimposed computationally, the ringing and the abrupt transitions are averaged and smoothed, and the visual appearance of the reconstructed scene is improved. Now we use the back-projection algorithm to reconstruct the scene at the different depths.
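The depth-dependent processing described above can be sketched roughly as follows: slice the scene into axial intervals using the disparity-derived depth map, deconvolve the elemental image with the defocus PSF assumed for each interval, and keep the deconvolved pixels belonging to that interval. The pillbox PSF and Wiener deconvolution below are stand-ins for the authors' modeled impulse response and filtering, and the parameter names are assumptions.

```python
# Rough stand-in (assumptions noted) for the depth-dependent deconvolution.
import numpy as np

def defocus_otf(radius_px, shape):
    """OTF of an assumed pillbox defocus PSF, built centred on pixel (0, 0)."""
    h, w = shape
    y = np.minimum(np.arange(h), h - np.arange(h))[:, None]   # wrap-around coords
    x = np.minimum(np.arange(w), w - np.arange(w))[None, :]
    psf = (x ** 2 + y ** 2 <= max(radius_px, 0.5) ** 2).astype(float)
    return np.fft.fft2(psf / psf.sum())

def wiener_deconvolve(img, otf, k=1e-2):
    """Frequency-domain Wiener deconvolution; k is an assumed noise-to-signal ratio."""
    F = np.conj(otf) / (np.abs(otf) ** 2 + k) * np.fft.fft2(img)
    return np.real(np.fft.ifft2(F))

def extend_depth_of_field(elemental, depth_map, intervals, blur_radius_px):
    """intervals: list of (z_min, z_max); blur_radius_px: assumed blur per interval."""
    out = np.zeros_like(elemental, dtype=float)
    for (z0, z1), r in zip(intervals, blur_radius_px):
        mask = (depth_map >= z0) & (depth_map < z1)
        restored = wiener_deconvolve(elemental.astype(float),
                                     defocus_otf(r, elemental.shape))
        out[mask] = restored[mask]
    return out
```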
And we compare the results with and without applying our method. For example, in the upper row you can see the reconstruction for the depth where the first toy figure was located, where we have just applied the back-projection algorithm, and you can compare these results with the results in the lower row, where we have previously applied the depth-dependent deconvolution method to each captured elemental image. These are the results for the depth where the second toy figure was located. And finally, this is the reconstruction for the third depth. As I previously mentioned, parts of the scene that appear blurred in the elemental images appear also blurred in the reconstructions. So if you compare the toy figure in the reconstruction of the upper row with the same toy figure in the elemental image, you can see that they suffer from the same blur. From these results, we can conclude that the depth-dependent deconvolution method allows us to recover details that are impossible to distinguish without applying it. And to conclude, let me show you this video, where we present a stack of reconstructions at different depths. In the left video, we have just applied the back-projection algorithm, and in the right video, we have previously filtered and deconvolved each elemental image with the depth-dependent deconvolution method. And these are our results. That's all. Thank you. Wow. Does anyone have any questions for the author? There ought to be a lot of questions for the author on this; this is really spectacular. Okay, maybe I'll take the prerogative here. So how did you characterize the PSF of the lens? Did you assume there was just one PSF, or was there sort of a three-dimensional space of PSFs that you would back-project? Yeah, we get the impulse response of the capture setup, which is a 3D version of the PSF. I mean, you have one PSF for each plane in the object space, so you have a mathematical formula to characterize this 3D PSF. And then what we do is we slice the object space into axial intervals, we calculate the PSF for each axial interval, and we filter and deconvolve each elemental image for each axial interval. Okay. So then in your deconvolution step, I missed it: did you back-propagate the wave propagation, or did you deconvolve with the shape of the PSF to sort of divide it out in frequency space? There are two different steps. Okay. First, you capture different perspectives of the 3D scene, and then each perspective is filtered and deconvolved. Then you take all the elemental images that you have filtered and deconvolved, you apply the back-projection method, and you reconstruct the scene at different depths. Okay, and that leads to my third and final question, which was: did your results depend highly on the bit depth of the files that you used, or on the sensor noise, to prevent ringing and things like that? I don't really know. Okay, if you don't know, then it means it was not a problem. Wow, that's really interesting. So any other questions? This can't be. Okay, well, thank you very much. Thank you. Thank you.
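For completeness, here is a minimal shift-and-sum sketch of the computational back-projection step for synthetic-aperture elemental images. The pinhole scaling (focal length over pixel pitch) and the sign and axis conventions are schematic assumptions, not the authors' implementation.

```python
# Minimal shift-and-sum sketch of computational back-projection.
import numpy as np

def back_project(elementals, camera_xy_mm, depth_mm, f_mm, pixel_pitch_mm):
    """elementals: list of HxW arrays; camera_xy_mm: matching (x, y) positions in mm."""
    acc = np.zeros_like(elementals[0], dtype=float)
    for img, (cx, cy) in zip(elementals, camera_xy_mm):
        # A point at depth_mm shifts by ~f/depth on the image side per mm of camera travel.
        dx = int(round((f_mm / pixel_pitch_mm) * cx / depth_mm))
        dy = int(round((f_mm / pixel_pitch_mm) * cy / depth_mm))
        acc += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    return acc / len(elementals)
```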
|
Integral Imaging is a technique to obtain true color 3D images that can provide full and continuous motion parallax for several viewers. The depth of field of these systems is mainly limited by the numerical aperture of each lenslet of the microlens array. A digital method has been developed to increase the depth of field of Integral Imaging systems in the reconstruction stage. By means of the disparity map of each elemental image, it is possible to classify the objects of the scene according to their distance from the microlenses and apply a selective deconvolution for each depth of the scene. Topographical reconstructions with enhanced depth of field of a 3D scene are presented to support our proposal. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30598 (DOI)
|
Thank you, John. Well, the history of 3D with polarized light has been fascinating to me, and I hope it is to you too. I'm going to start with a little prehistory, because I think it's fascinating that Wheatstone looked into binocular vision early on, even before photography started, and he designed the mirror stereoscope and showed with stereoscopic drawings that our two eyes function together to make depth perception. And these are some of the drawings that he used to demonstrate this. He lectured before the Royal Society in 1838, and photography came along just a few years later, and immediately people were fascinated. He designed a mirror telescope, I'm sorry, stereoscope, which is not unlike what we can use today, and this type of stereoscope has been very much used by stereographers and is still very important. Then the next development that became important was the realization that you could put two images one over the other and encode them. Louis Ducos du Hauron, who lived from 1837 to 1920, invented the anaglyph, and he had a US patent that was issued around 1900, and the anaglyph, of course, is still very much used. So that's sort of our background. A British physicist, John Anderton, decided that one could encode by polarization, and he designed several types of polarization incorporated into projection devices and viewing devices, and this particular patent was issued in 1906. It shows polarization by a pile of plates, a pile of glass plates as polarizers, which is a very impractical solution to the problem, and as far as we know, he never actually built this device. This is how the pile of plates works as a polarizer: at the correct angle, the reflected beam is polarized, and the transmitted beam is also polarized. So that was the situation until Dr. Land came along. He was fascinated by polarized light, and he was also intrigued by a publication by the British physician Dr. Herapath, who, for some reason we don't know, added iodine to the urine of a dog that had been fed quinine and got these marvelous polarizing crystals. They were very tiny crystals, but they were highly polarizing, and it was only recently that the structure of herapathite, which was named after Dr. Herapath, was understood. This is a picture of the structure of herapathite, and what's interesting is that the iodine arranges in very neat rows. The purple spheres are the iodine atoms, and iodine itself can form such polymeric chains, and so this was a very effective method of polarization, but the crystals were very small, and Herapath and others tried for years to grow large crystals, and it seemed impossible. At one point there was a German scientist who apparently grew large crystals of herapathite, and I'm not sure, well, that's Dr. Land, I'm sorry, I've gotten out of sequence a little bit, but the Zeiss Company was marketing polarizers that were made, presumably, with large crystals of herapathite. The scientist who developed the large-crystal system was responsible for what Zeiss was making, but just a few years ago the scientist who worked out the structure of herapathite actually bought a polarizer from Zeiss and took it apart and could not find a large crystal. It looks as if maybe it was a plastic embedded with small crystals, which is what Dr. Land had invented, but we don't know.
At any rate, pre-Polaroid, Land had formed a company, Land-Wheelwright, with a physics instructor that he met at Harvard, and they formed the Land-Wheelwright company. The system that Land used was to mix the tiny crystals of herapathite into nitrocellulose and extrude that material to orient the crystals, and that was the basis of the first commercial polarizer, which was the J-polarizer. That's Dr. Land at an early time with two polarizers crossed, and you probably recognize that as the symbol for Polaroid Corporation, which he formed later. That was the beginning. In 1938, Polaroid introduced the H-polarizer, which is a polyvinyl alcohol, which looks like the top row, and below, on the right, is the polymeric iodine, which attaches very strongly to polyvinyl alcohol, and that forms the H-polarizer, which is still the polarizer that is used by everyone now, and it's very efficient. So that was the beginning of Polaroid Corporation, but even before the Polaroid Corporation formed, Land and Wheelwright were selling the J-polarizer to Eastman Kodak for making camera filters. So in 1938, Polaroid introduced the H-polarizer, which is this polyvinyl alcohol-iodine complex, and that is still the main polarizer used. In 1934, Land and Wheelwright met Smith College professor of art Clarence Kennedy, who specialized in Italian sculpture of the Renaissance period and was photographing to show the details of sculpture, and he was very interested in being able to display three-dimensional sculpture. So he hired Land as a consultant, he obtained some funding from Carnegie, and Land provided Kennedy a stereo camera, which was a pair of cameras, and a stereo projector, a pair again, and Kennedy became a consultant to Polaroid when it formed in 1937. His work is still very exceptional, and he has thousands of photographs of sculpture that are in the fine arts archive of Harvard University, and he served as a consultant to Polaroid for years. Kennedy is the person who coined the name Polaroid, which of course became the name of the corporation that Land formed in 1937. Now, as early as 1934, Land and Wheelwright were experimenting with stereo motion picture photography. They had parallel 16-millimeter cameras and a pair of projectors, and they went to Rochester and demonstrated this before Dr. Mees, who was the director of research at Kodak at the time, and Mees became very supportive and provided them an endless supply of color film, which at that time was Kodacolor, an additive material; Kodachrome didn't come along until 1935. So Land and Wheelwright presented the first stereoscopic full-color motion picture demonstration at a meeting of the Society of Motion Picture Engineers in 1935, and in 1936 the New York Museum of Science and Industry opened in Rockefeller Center, and they had a display of polarization 3D, which they called Polaroid on Parade, and that was 1936, before the Polaroid Corporation even formed. The manufacture of polarizers at Zeiss was going on, and they were actually demonstrating 3D movies in Germany in the late 30s, and their catalog still lists the Bernotar polarizing filter, which they claimed they could make large enough to cover a windshield, but there's no evidence that that really happened.
At any rate, there was interest at Polaroid also in doing polarized headlights and windshields to protect against the glare from oncoming vehicles, and Polaroid for a number of years was trying to market this to the automobile companies, but that never happened. It became less important as we developed divided highways, and oncoming cars are not quite so much of a problem. The next big thing was the 1939 and 1940 World's Fair, and in the Chrysler Pavilion there was a 3D movie shown. It was black and white in 1939, and then in 1940 it was in Technicolor. The movie showed the self-assembly of a Plymouth car engine, in which the parts of the engine moved themselves into place. There were no people shown, just the parts dancing into place, forming the engine. It's quite an exciting movie, and millions of people saw it at the Chrysler Pavilion. I guess a few people have seen it recently because it's been shown at the 3D Expo, and that was a pretty big milestone. I guess the next thing that happened at Polaroid was the Vectograph process. There was a Czech inventor and stereo enthusiast called Joseph Mahler who wrote to Dr. Land and suggested that the anaglyph principle could be realized with images in terms of polarization. Dr. Land invited Joe Mahler to come visit, and subsequently he moved to the U.S. and became part of the Polaroid Vectograph research lab. He and Land co-invented the Vectograph, and all this happened starting in 1938, and Mahler left Czechoslovakia just shortly before the Nazis took over that country. That was a very good confluence of events. I guess I should have moved some slides here. Well, that's all right. This is the 45-degree stretcher, which became very important in the construction of Vectograph film. The film enters the machine oriented at 0 and 90 degrees and emerges stretched at 45 degrees. What's important about that is that you could then laminate two such layers back to back, and they would obviously be oriented at 45 degrees. That also made it possible to make long winds of film that were suitable for motion picture production. There were big possibilities there. I joined the company in 1944, and when I joined the company I was assigned to the Vectograph research lab, and Joe Mahler was there, and Sam Ketroser, who was a wonderful photographer and engineer, had come from France. He was born in Bessarabia and had fled the communists and gone to Paris. Then he met an American woman and decided to come to the U.S. with her, and he became a very important part of Polaroid. He established a Vectograph studio and took many pictures trying to optimize the camera conditions. I'm getting ahead of my story again. That's the Zeiss catalogue page that shows the Bernotar filter; the scientist was a man named Bernauer. That's a picture of Joe Mahler about the time that he came. I'm sorry, I think the slide that says "Vivian, need a picture" means that this was the set of slides we were working on, and not the final set, but that's all I wanted to say. Anyhow, when I joined Polaroid in 1944, the company was very involved in supporting the war effort. In December of 1940, Dr. Land had set as Polaroid's priority the support of the war effort. Over the next several years, they provided the military with many optical devices, including specialized goggles, the night-vision goggles, and they developed a cast plastic optics department and made optical elements very quickly. They developed an optical ring sight and a heat-seeking missile system, as well as the Polaroid Vectograph.
To expedite the use of the Vectographs for aerial reconnaissance and surveillance, Polaroid established a war school in this building, which is a historic building in Cambridge. It was at that time the factory of the Walworth Company, which is a coincidence because we're only distantly related Walworths. This building was recognized as the place where Alexander Graham Bell made his first long-distance phone call. The Walworth Company offices were in South Boston, and they borrowed some telegraph lines and hooked up a telephone at each end and had a two-and-a-half-hour conversation between Bell and Mr. Watson, who was famous for having taken the first phone call, when Bell said, "Mr. Watson, I want you," and he could hear him. But anyhow, they had a long conversation. So the Telephone Pioneers of America recognized this site, and this is the plaque that they put on the building. And of course, now it's also recognized as the place where Dr. Land had his office and lab, just inside where that plaque is, and we're working now to get a plaque to honor Dr. Land's use of that building; I think that's going to happen. So in the course of the Vectograph work, the school was set up in this building to train military technicians in the making of Vectographs. This is a picture of the school with students in it. They came for a two-week course, and then they went out to serve in the field, and Polaroid shipped kits that were packed in footlockers. They were used both in the Pacific theater and in Europe. And I've got a few, oh, this is the aerial camera that was popular at the time. The aerial reconnaissance planes would shoot successive pictures as they flew over a terrain, and two successive pictures made a very good stereoscopic pair. So that was an important part of the battle at Guadalcanal: we were able to detect the Japanese camouflaged gun emplacements and other things, and that was a very important piece of the work. I'm sorry, I've got the wrong slide here; there was a Bay of Pigs picture, but that came much later. These are some of the pictures that were made in the school. Goodness, I'm sorry. There was a Professor Jack Ruhl at MIT who made a number of Vectographs teaching celestial navigation, and in the demo session I will have some of his Vectographs to show. These are more pictures. They also made novelty ones for fun; the two pictures don't have to be stereoscopic. Polaroid continued to make Vectograph kits, not in footlockers, but they sold the Vectograph materials and instruction books for a number of years after the war. And meanwhile, they were also working on the color Vectograph. I'll see what comes up now. Oh, I've gone way ahead; I'll want to go back to the demo. We're into the 1950s, and that was the beginning of the 50s 3D craze, which didn't last long enough, unfortunately. That was a picture of the audience at Bwana Devil, which appeared in Life magazine in, I think, 1951. And then we come to Melody, and that's a big adventure. Polaroid and Technicolor worked together for a time in the early 50s trying to develop Vectograph motion pictures. They were using at that time the imbibition printing process, which is like dye transfer, where you make relief matrices in gelatin, dye them, and then transfer one color at a time to make a three-color image. Of course, the Vectograph requires two of these.
And we had a good dye transfer process working in the lab, but it was so difficult and really demanding, because you had to have six matrices which were in perfect register and transfer them all to one Vectograph sheet. Polaroid made a number of those; Dr. Land gave a talk before the Optical Society in 1973 showing lantern slides that we'd made in the lab, but it was too costly a process to be developed commercially. So that didn't happen. But at Technicolor, they had good equipment for synchronizing, so they had a pilot machine, and they produced a test on a live-action film, Taza, Son of Cochise, and they produced a test film of Melody. It never got beyond the pilot stage, of course. And Melody was the first Disney cartoon that was shot in 3D, so that was a wonderful test. The movie producer Francis Ford Coppola was very interested in 3D. He had visited Dr. Land, and he decided in the 90s that he wanted to include a 3D sequence in a film he was planning, and he assigned his associate, Kim Aubrey, to look into this. Kim got me to come to San Francisco, and I met with them, and we discussed it; at that time Polaroid was still able to make the 35-millimeter film, but they hadn't done it for years, and it didn't turn out to be practical. But in the course of this investigation, Kim and Coppola visited Technicolor and found out that nobody there knew anything about it; the present management was just unaware that there had ever been this work. But a long-time employee who worked there mentioned that he remembered working on Melody, and Kim managed to track down a person who had saved pieces of the Melody film. He had apparently cut it up to give out clips to his friends. But Kim and a professor at CalArts tracked this down, and Kim was able to piece it back together, using a complete Melody film in 2D for guidance, and they put together a really nice 3D Melody, which I was very happy to see. I went to a showing at the Academy of Motion Picture Arts and Sciences where they were reviewing Technicolor films that had been made by the imbibition transfer process, because there was growing concern about the instability of the negative-positive films that Technicolor had gone to, and at the same time Kim was able to show me the 3D Melody that they had reconstructed. And I understand now that it's been shown at the 3D festivals that have been put on, and it's really quite exciting that it still exists and is in reasonable condition. So in the course of following up on this with Kim Aubrey, I talked to Scott Duncan, who was the professor at CalArts, and to Jeff Joseph, who has managed the 3D festivals. And Jeff told me that he had recorded an interview with Richard Goldberg, who had been the director of research at Technicolor at the time we were doing this work together, so I was able to listen to these interviews with Dr. Goldberg, and I found that very exciting. Dr. Goldberg is still living, but he's 89 and not very well; I talked to his wife recently. And it's just interesting to be able to reconstruct all these things. So the restoration was just a very exciting development. And did they show it again last year? So it's still doing fine. I have a couple of frames of Taza, Son of Cochise, which I'll put on display at the demo session, and it still looks good. So I guess that's about all I need to say about the Vectograph. Let me see what else is here. There's Son of Cochise. This is taken from a talk I gave in, I think, 1985 or so at the... I'm trying to think what it was called.
It was a conference... similar to this; it was before the Stereoscopic Displays and Applications conference. I don't recall the name of the conference now, but anyhow, we talked about the difference between circular and linear glasses, and this slide shows the difference at tilt angle with linear and with circular polarization. There are some obvious advantages to circular, because you can tilt your head and not lose the stereoscopic effect and not see double images. And I was very pleased when RealD came along and decided to use circular polarization. Lenny Lipton had been at the earlier meeting, and I think that influenced him; I hope it did. So that conference was called Three-Dimensional Display Techniques, and it was in 1983. And then the SPIE conference in 1984 was Optics in Entertainment. I was a proponent of circular polarization, and I had gotten Polaroid to make circular polarizing viewers and test them when Jaws 2, or Jaws 3, was introduced. Jaws 3, thank you. And Polaroid showed it in 2D in some theaters and had other theaters show it in 3D, and they did not report a significant difference, so Polaroid did not pursue that. But fortunately RealD did. So I guess there we are with RealD, and I think it was a very convincing use of circular polarization, and that of course goes on. I'm sorry, my slides are really quite out of order, but these are crystals of herapathite, and that is what Dr. Land had observed before he learned to align them. Well, I don't have to worry about slides anymore. The other things I wanted to mention were liquid crystal polarizers, which came along, and CrystalEyes was Lenny Lipton's first real use of 3D for display on computers. The point that makes liquid crystals interesting is that they are rapidly reversible electrically. I guess the next thing that came along was the micropolarizer, and Sadeg Faris of the Reveo company displayed really beautiful 3D slides at the SD&A conference in 1994, and then VRex has made use of that in a number of products. Then in 2010 the LG company introduced its film-type patterned retarder, which utilizes a very thin liquid crystal pattern layer that can be coordinated with the 3D images. The film-type retarder coordinates with the raster of the screen, so every other line of resolution is lost, but the patterns are so fine now that it really doesn't matter. And the film-type retarder, I think, is really what will predominate in 3D television. So I guess where we are now is that 3D television is coming along and certainly the 3D movies are doing very well, and we're very fortunate that all this is working as well as it is. I'm sorry that I got my slides jumbled, but I think I've told you about everything I know. So if there are any questions, I'd be happy to answer them. Steven has the microphone. If there are any questions or comments, please put your hand up and Steven will supply the microphone to you. When you were working at Polaroid, how many other women were working in your field? How many other women were working in your field or at Polaroid? How many other women? Women? Women? Not girls. Quite a few. Polaroid was very liberal. And I should mention perhaps that there's a certain amount of serendipity in my being at Polaroid, because when I graduated from the university I knew I was going to live in Rochester, because my husband had taken a job in Rochester; he graduated a year ahead of me.
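The head-tilt point can be made quantitative with Malus's law: for ideal crossed linear polarizers the unwanted image leaks in proportion to the squared sine of the tilt angle, while ideal circular polarizers are unaffected by tilt. The numbers printed below come from that idealized model, not from the slide being described.

```python
# Idealized illustration (not the slide's measured data): crossed linear
# polarizers leak sin^2(tilt) of the unwanted image when the head tilts,
# while ideal circular polarizers are unaffected by tilt.
import math

def linear_crosstalk(tilt_deg):
    return math.sin(math.radians(tilt_deg)) ** 2      # Malus's law leakage

for tilt in (0, 5, 10, 20):
    print(f"{tilt:2d} deg tilt: linear leakage = {linear_crosstalk(tilt):.3f}, "
          f"circular leakage ~ 0 (ideal filters)")
```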
So I knew I was going to go to Rochester, and I had an interview with a recruiter from Kodak, and he assured me that there would be a job for me at Kodak; I had good grades and good recommendations. So when I got to Rochester I went to Kodak for a follow-up interview, and they did offer me a job, but it was a secretarial job, and they explained that they didn't use women in the laboratory. This man who interviewed me was kind of a fat, Tweedledum-looking person, and he said, you know, in the plant you might have to climb over a steam pipe or something, and I looked at him and thought, you know, I could do that better than you could, buddy. But anyhow, I did not take the job at Kodak, and in Rochester there was also a company, Defender Photo Supply, which later became part of DuPont, and I was able to get a job in the research lab there, and that was fine. Our director of research came back from a meeting with a Vectograph that he had been given, and I was very impressed. So when we moved on to Cambridge, my husband took a job in radar technology and moved to Boston to the Submarine Signal Company, which was doing sonar and radar at the time, and I just went right to Polaroid and applied, and I might not have even known about Polaroid if it hadn't been for that Vectograph I had seen, so that was pretty exciting. Defender was a very nice company to work for, but I discovered that they had a peculiar strategy for wages: everybody started at minimum wage and then you could advance, but I found out that the part-time student in our lab who was washing dishes was making more than I was, because he was male and I was female. As I remember, I was getting $18 a week and he was getting $22, so I complained and they raised me to $22, which was nice. But when I went to Polaroid I discovered that there was not a difference between men and women as far as salary was concerned, and Polaroid hired quite a few women, and Professor Kennedy every year would send his best student to Polaroid, and these would be arts majors. Dr. Land had the feeling that the confluence of art and science was very important, and that he could teach people to do experimental work, and so some of his best people were art majors, and Polaroid was a wonderful place for women. Anything else? I'd like to ask you about Stereojet, because in the Polaroid story Stereojet is probably the latest step. Stereojet. Stereojet. Oh, we formed Stereojet just a few years ago. Well, I guess I didn't mention, but after making color Vectographs in the lab by the laborious dye transfer process, when inkjet came along in the 1980s I thought that was really the way to do it, and we did some very nice tests at Polaroid, but the company was not interested in pursuing it. So after I'd left Polaroid in 1985 I did some other things for a while, but Dr. Land had formed the Rowland Institute, and I interested the Institute in developing the inkjet process, and Jay Scarpetti had a small lab devoted to developing Stereojet, and he was a wonderful person to work with. We did show Stereojet at IS&T in the 1990s, I think it was 1994, and I gave a paper also, I think the following year, on some refinements of the process, but then Polaroid, I'm sorry, Rowland merged with Harvard, and that really shut down the project. Jay retired and has since died. Then Polaroid had sold its polarizer division to 3M, and 3M continued making the 45-degree Vectograph sheet for a time, and then they closed that plant and that machine went into oblivion.
It was presumably stored in Rochester, but there was no possibility of making 45-degree sheet anymore, so we decided that we should try to use the 0/90 configuration: we could then use linearly stretched polarizer instead of 45 degrees. You can take 0 and 90, cross them, and have a stereo pair, and if you add a circular polarizer you can convert the polarization to circular polarization. That's really what got us started on StereoJet Incorporated. We're using a sheet that is linearly stretched and combining two, so instead of printing on the two sides of a 45-degree/45-degree sheet, we're printing on 0 and 90 and putting the two together, and then, if we want circular polarization, adding a retarder; but we could also use linear polarization with 0/90 glasses. So that's what we're doing. Anything else? Thank Vivian for this talk, but she'll be around here and we can continue to ask questions because
|
Stereoscopic photography became popular soon after the introduction of photographic processes by Daguerre and by Talbot in 1839. Stereoscopic images were most often viewed as side-by-side left- and right-eye image pairs, using viewers with prisms or mirrors. Superimposition of encoded image pairs was envisioned as early as the 1890s, and encoding by polarization first became practical in the 1930s with the introduction of polarizers in large sheet form. The use of polarizing filters enabled projection of stereoscopic image pairs and viewing of the projected image through complementary polarizing glasses. Further advances included the formation of images that were themselves polarizers, forming superimposed image pairs on a common carrier, the utilization of polarizing image dyes, the introduction of micropolarizers, and the utilization of liquid crystal polarizers. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30601 (DOI)
|
Okay, let's start. I will describe a really fun project and, I hope, many interesting things. You know it is a very common situation when people come out of the cinema and complain that they have a headache. Even in my family, which, believe me, is pretty well educated in 3D, I have people who were really disappointed after visiting 3D movies because of pain. So the questions are: what artifacts do we have in movies, how often do we have these artifacts, and how do different movies compare? I will give a short introduction, then the proposed methodology, applications, and preliminary results. This is a real frame from Resident Evil. You can clearly see, first, a significant difference in color and, second, clearly visible vertical parallax. It is a really atypical, impossible situation in real life, when one eye sees the picture at one level and the other eye sees it at another level, and it is really painful. You can see the difference in color in the visualization, where it is measured. Here is another sample. This movie, as shown in the cinema, has a positive parallax of 5%, and this means that for a typical distance between the eyes your screen size is limited to 55 inches. Even with home projectors you will have a situation where the left eye has to look further left and the right eye further right, that is, the eyes look in diverging directions, and that is also painful. Here is a sample from Drive Angry. You can also see rotation, a clear difference in color, and a difference in sharpness. Here is the table. The question is how to find such problems in a movie, and the next question, of course, is how to fix all such things. We took several Blu-ray discs and analyzed them; initially we analyzed trailers, and currently we analyze full-length movies. I will also show you some preliminary results for converted video. Several metrics were constructed, and it took a pretty long time: our first metrics and many results date from 2011, so more than one year ago, and we spent a lot of time tuning them before publication. Next we have per-frame analysis and a general comparison of different movies. What does this mean, in a few words? Here is a sample of the color-mismatch metric: a big value that goes off the chart, a moderate value, and small values. Here is a sample with a moderate value; you can see a difference in color, but not so big. Here is a big value; you remember this frame, a very significant difference. And here is a very moderate value, and we can see that yes, it really is a very moderate difference. We analyze the full movie in this way. Another type of analysis is analysis of parallax. It is pretty close to the results in the previous presentation, but here a lighter area means that a large number of objects are at that position, so you can analyze not only the boundaries of objects in depth but also how many objects are there. Also pretty useful is the integral-histogram type of analysis, where you can see how the different values are distributed through a movie. Here you can see very, very good color: no color mismatch. Another sample: yes, you can see a little bit of color mismatch. Here you can see very significant color mismatch, and here absolutely significant; it's Dolphin Tale, a children's movie. Okay, let's see some problems. Here is Resident Evil, with a difference in sharpness. Using these metrics we found thousands of mistakes, even in movies like Avatar.
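The talk mentions a per-frame color-mismatch metric and a vertical-parallax analysis but does not give their formulas, so the following is only a rough, hypothetical sketch of how such per-frame scores could be computed with OpenCV and NumPy; the histogram comparison, the ORB feature matching, and all parameter values are illustrative assumptions, not the metrics used in the talk.

```python
import cv2
import numpy as np

def color_mismatch_score(left_bgr, right_bgr, bins=32):
    """Rough per-frame color-mismatch score: total-variation distance between
    normalized per-channel histograms of the two views, averaged over channels.
    A production metric would first compensate for disparity and occlusions."""
    score = 0.0
    for ch in range(3):
        hl = cv2.calcHist([left_bgr], [ch], None, [bins], [0, 256]).ravel()
        hr = cv2.calcHist([right_bgr], [ch], None, [bins], [0, 256]).ravel()
        hl, hr = hl / hl.sum(), hr / hr.sum()
        score += 0.5 * np.abs(hl - hr).sum()
    return score / 3.0          # 0 = identical color statistics, 1 = disjoint

def vertical_parallax_stats(left_gray, right_gray, max_matches=500):
    """Rough vertical-parallax estimate: match sparse ORB features between the
    views and inspect the vertical component of the displacements. A production
    metric would use dense, occlusion-aware correspondence instead."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)
    if not matches:
        return None
    dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1]
                   for m in matches[:max_matches]])
    return {"median_dy_px": float(np.median(dy)),
            "p95_abs_dy_px": float(np.percentile(np.abs(dy), 95))}
```

Running functions of this kind on every frame and plotting the values over time is what would produce the per-frame charts and integral histograms described in the talk.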
This is a difference-in-sharpness penalty map, also from Avatar, and also a difference in sharpness, also Avatar. Here you can see a clearly visible difference in sharpness. Fortunately it's not so bad, but sometimes even such things mean significant pain. For example, here is Resident Evil. It's computer graphics, and if you have done enough computer graphics you know that you have a motion-blur checkbox for objects, and here they obviously turned this checkbox on for one eye and off for the other. So you see, and of course this will be painful. Since it's computer graphics, such things can be really easily fixed, but it looks like they did not have such good quality assurance. Another sample: Drive Angry. And we have thousands of samples. Here you can see the amount of extra blur for the right and left eye, so they had really different parameters for the different cameras: a very, very bad thing. Channel mismatch is when your channels are swapped, and the left eye receives the right channel and the right eye receives the left channel. From a technical point of view this is really a challenging task, because it's not so easy to detect such things; they can be detected using a really good analysis of occlusions. Here is a real sample from The Smurfs, also a children's movie. This scene was very short, so they missed this mistake. If you look a little, you will see that the left and right views are obviously swapped. It's interesting that in the trailer they have this mistake, but in the movie they fixed it, so the movie has no such mistake. Another sample: this is The Three Musketeers, the last one. You can see a man, some candles in the background, and a special effect in front. The special effect is drawn over the man, but it was obviously placed at the background depth, so it appears over this man yet at the depth of the background. And unfortunately we know of samples where all the special effects in a movie had swapped channels or were at the wrong depth. Sometimes it happens. And here are some very interesting overall charts: a general analysis of depth budget for full movies, by release date. Here is the very old Into the Deep; here is The Three Musketeers, which you can see was shifted toward the screen plane; The Amazing Spider-Man and a few more; and for the newer movies you can see the analysis as well. For professionals such diagrams are really interesting, to understand how many scenes in a movie have a good depth budget and how many scenes look pretty flat. The difference in vertical disparity, vertical parallax, is, as I told you, a very painful thing. You can see that the old movies, yes, have a lot of such mistakes. Avatar for its time was really very good, but a lot of movies after Avatar have very significant vertical parallax. You can see that Spider-Man and Titanic have very good values of vertical parallax. Titanic was converted, so of course it's very good, but by the way, we found some problematic scenes probably even in it. And color mismatch: you can also see the analysis of color mismatch. Fortunately we have new movies after Avatar that are significantly better than Avatar in terms of color mismatch. And sharpness mismatch: also, fortunately, for new movies the situation is not so bad. So cameras become better and quality becomes better. Applications: you understand that it is really possible to detect these things, and if it is possible to detect them, it is generally possible to fix them. For example, here is the sample that I showed you, now corrected, and you can see how it should look.
The difference is clearly visible. Even things like this difference in sharpness (it was a mistake made during capture) can be corrected, here by blurring, and you can see that now both views are blurred. It is easy to do these things, and they can be done automatically without problems. From a mathematical point of view, the more challenging task is to increase the sharpness of the blurred channel. It is really challenging, but it is also possible; here is a sample where both channels become sharper. So again, from a mathematical point of view all these things can be fixed now; the question is how to turn algorithms into technologies and move those technologies into the industry. Here is an example of how we are trying to move these things forward: a 3D stereo festival, an international festival that takes place in Moscow; by the way, some of it was shown here yesterday in the 3D theater. And here is a depth-budget comparison; I hope that at least next year we can make a similar comparison for the 3D theater of this conference. Some interesting things for future work: if you are familiar with 3D, you know that many blockbusters currently are converted, maybe something like half of the blockbusters, so the question is how to analyze their quality. Here is a sample of the sharpness-mismatch metric that we ran on the Avatar movie, and you can see a clear difference in sharpness at the boundary of the window for the right eye, at close distance. We created such a metric and found thousands of such artifacts in Titanic. Then there is flat-object monitoring: in converted movies we sometimes have the situation where an object, for example some running man, is really flat, and it is visible that he is flat. Here is a sample of the object and the difference between the right and left channels; you can see that this object is really flat. Another flat object. Another flat object. And stretching, which means that a very simple conversion method was used. And a really funny thing: hair cutting. If you are familiar with the algorithms needed for the problem called video matting, you know it is not so easy to do, and you can see that during conversion they cut off all this hair, and the same thing here: clearly visible hair cutting. We want to move this analysis forward: we plan to publish several reports with analysis of movies, and we welcome support. We have also created a system to measure movies; all our metrics do not run very fast, so we need a cluster of computers, and we wrote a system to measure all these things. All researchers are invited: we can use your metrics to measure a large set of movies, and we can also evaluate movies and provide artifacts if you want to make a subjective analysis of them. I think this would be useful. So: no more low-quality 3D, if possible. Thank you, Dmitri. Any questions for a very thorough set of tools, then? I think the first. Hi, yeah, I just want to ask you: you talk a lot about mismatches in sharpness being a problem with stereo 3D, but I was just wondering how much of a problem it really is, in the sense that I have seen some studies suggesting you can get away with having one view more blurred than the other. It seems that your visual system kind of adjusts to the sharper image. I don't know how true that is; I just ask for your opinion, please. We have worked in this area for the last two years at least, and during the last year we brought in specialists in human vision and specialists in binocular vision.
They told us a lot of interesting things, and I want to tell you two of them first. If we talk about a difference in sharpness, it is interesting that a young person will often still not see it; it will be okay for them, because they have very good accommodation. Most people here have the situation where, at the end of the day, one of your eyes is more tired than the other, and the picture from that eye is a bit different. So your brain has a very good mechanism to compensate for a difference in sharpness between the two eyes. But the problem in movies is that the difference in sharpness appears for very short periods of time. In real life, when one eye is tired, you have a blurred picture from, say, the right eye for the whole evening; in the cinema the situation changes, sometimes several times per second. How painful exactly this is, is of course still a question, but due to recent research we can now claim that older people feel these things more than younger people. That was the answer. And my personal answer is: guys, it is possible to solve this, it is possible to compensate for it. Why would you want to keep these things in a movie if it is possible to fix them? Okay, thank you very much. Sorry, we have to wrap up; we're running a bit late. Okay, let's move on to our next speakers. Thank you.
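On the sharpness-mismatch discussion above, here is a purely illustrative sketch (not the speaker's algorithm) of one crude way to detect a sharpness mismatch and apply the "blur the sharper view" fix mentioned in the talk: compare Laplacian-variance sharpness estimates of the two views and blur the sharper one until it roughly matches. The sigma schedule and the sharpness proxy are assumptions.

```python
import cv2
import numpy as np

def laplacian_sharpness(img_gray):
    """Simple sharpness proxy: variance of the Laplacian response."""
    return cv2.Laplacian(img_gray, cv2.CV_64F).var()

def equalize_sharpness_by_blurring(left_gray, right_gray, max_sigma=3.0, steps=30):
    """Blur the sharper view until its Laplacian variance roughly matches the
    softer view. Sharpening the blurred view instead is the harder problem the
    talk mentions and is not attempted here."""
    s_l, s_r = laplacian_sharpness(left_gray), laplacian_sharpness(right_gray)
    sharp = left_gray if s_l > s_r else right_gray
    target = min(s_l, s_r)
    best = sharp
    for sigma in np.linspace(0.1, max_sigma, steps):
        blurred = cv2.GaussianBlur(sharp, (0, 0), sigmaX=float(sigma))
        best = blurred
        if laplacian_sharpness(blurred) <= target:
            break
    return (best, right_gray) if s_l > s_r else (left_gray, best)
```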
|
Creating and processing stereoscopic video imposes additional quality requirements related to view synchronization. In this work we propose a set of algorithms for detecting typical stereoscopic-video problems, which appear owing to imprecise setup of capture equipment or incorrect postprocessing. We developed a methodology for analyzing the quality of S3D motion pictures and for revealing their most problematic scenes. We then processed 10 modern stereo films, including Avatar, Resident Evil: Afterlife and Hugo, and analyzed changes in S3D-film quality over the years. This work presents real examples of common artifacts (color and sharpness mismatch, vertical disparity and excessive horizontal disparity) in the motion pictures we processed, as well as possible solutions for each problem. Our results enable improved quality assessment during the filming and postproduction stages. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30603 (DOI)
|
Hi, thank you for that exciting introduction. Welcome back to this afternoon's session. So we have a new thing in our marketing department at Durham, that we have to show a picture of Durham everywhere we go, so I've done that now. This is a talk today about stereoscopic game design and evaluation, and much of the talk is based on a project by one of my students, Joe Rivet. But before that, I'm going to talk a little bit about some of the things we've done over the years in game design for stereoscopic displays. The reason I'm slightly motivated to do that is that last year, talking to Ian Bickerstaff from Sony, he didn't seem to think there was any evidence that stereoscopic games worked, so hopefully I can convince you a little bit of that. The challenge we'll get onto in this talk is: how can you create a game that can only be played in stereoscopic 3D, a game that would be impossible to play on a 2D display? We'll go through some of the design decisions we took to get to that point, and we'll see how well we did. One caveat: you shouldn't render the game as a random dot stereogram, because then of course it wouldn't work at all in 2D, and that would be cheating. Before I get onto that, I'll look, for just a couple of minutes, at some of our previous work in games, which, looking back at what we started doing, we've been doing since 2004; it was very popular with students for some reason. There are a couple of games of Asteroids that we produced. William Pegg produced this game in 2004, which is a 3D version of Asteroids: you steer in a cube of space, you blow up the asteroids as usual, and you get scores for how many of these things you shoot. He tested this on a 2D display and a 3D display, and all the results will show the 2D result first and then the 3D result. Playing this game on a stereo display, players got a 46% benefit from using stereo; that was a mean over eight different players. The following year, Dave Woodhouse built a very similar game. You can see slightly more clearly here the cube of space that the players played in. We used the cube of space throughout all these games to keep the depth budget limited, so that we weren't using an excessive depth budget, nor were we compressing the geometry; so the geometry was looking realistic. He had 12 players this time, and they were achieving a score of 114 in 2D versus 191 in 3D, a 68% increase in performance from using a stereo 3D display. One of the two additional things he did was to add in aerial perspective, so this is like fogging in the game, to give you another depth cue. He found that had a remarkable effect, improving performance by 146%, so it was much better than using stereo alone. We didn't investigate how or why, but it was a measurement we did, again with 12 players. One of the questions that seems to come up is: does stereo work on cell-phone-size displays? Back in 2005 we started studying this: we took the same game that was played on a desktop display for these results and played it on a cell-phone-size display, and we pretty much got the same results for stereo as we had for the desktop-size display. We'll see that repeated again in a second. I'm going to call this one a selection game, which we did in 3D and 2D. The idea was that some of these squares were in a plane further back and some of these squares were in a plane closer to you, and you have to go through and select all the ones that are closer to you.
The reason we were doing this was in fact for an eye surgeon who was looking at diagnosing diabetic retinopathy; one of the things you have to do there is separate out layers of objects. But it could just as well be a game where you're having to hop across lily pads or something. Anyway, comparing 2D to 3D with 13 players of the game, we got a 28% increase in performance from using stereo 3D. This is a stereo matching task we presented here a couple of years ago. I sadly hunted high and low for this: we also had a student actually implement Tetris in 3D, and I couldn't find the exact results from that, so I've repeated these instead. I think they were similar; there was a distinct benefit from playing Tetris in 3D. And here you're seeing, from the student who did this work, a 28% increase, the mean across 15 players, from using 3D. So there's kind of a story here that 3D is good for gaming if the game has some 3D element in it. And again, like Dave Woodhouse, he tested it on a cell-phone-size display and got pretty much identical performance to the desktop-size display. Then, just last year, Jacques Kaiser, a student from France who came to work in the lab for a year, worked on, I hope it's obvious from the picture, a game of Snake in 3D, where you had to fly through this environment, not crash into walls, and eat cans of Coke, although he called them barrels of food. He had 12 players play this in 2D and in 3D, and again he saw about a two-times increase, a 111% increase in scores, playing it in 3D. This is a bit more complex than the previous games because it's actually a navigation task in 3D. So on average across all those games, we saw about a 71% increase in performance for players playing in 3D. If you're a competitive game player, why you haven't already bought a 3D display is a mystery, I think. And as well as the game performance we found, the cell-phone-size display didn't have any negative effect. OK, so on to our 3D-only game. When you think about what kind of design goals you have in producing games in 3D, we came up with the idea that you really need to make sure you're controlling the depth cues very carefully at the points where players are making decisive depth judgments. At the point where they're just sitting there going, wow, it's 3D, you're not so worried about that; but at the point where they have to make a depth judgment in the game that affects their performance, that's the point where you have to worry about what depth cues are there. You can then add or remove them, depending on whether you want to make it easier or harder. And since we're trying to make it playable only in stereo 3D, we're going to try and remove all of the non-stereo-3D depth cues and just keep the stereo ones. The game we came up with, that Joe came up with in the end, was this one, where you've got a little spaceship flying across the screen with hoops coming towards it, like threading a needle. As each hoop comes through, to score a point you have to go through the middle of the hoop, and when you do, you get a point. The view is looking down onto the top of the spaceship as it flies through the scene, so you have to move the spaceship up and down the screen and in and out, and it's that in-and-out movement that's critical, the one we try to have supported only by the stereo depth cue.
So, the occlusion cue: by designing a game like this, the only point at which occlusion was helpful to you was by the time you got to the hoops and had hit them or not, and it was too late by then. It is very hard in a real-time game to remove occlusion altogether, but we removed it from the decisive depth judgment, the point where you're making these decisions about where to steer through the hoops. So it still happens, but not when it's at all useful to you, even though it is normally a very strong depth cue. Linear perspective we wanted to keep, because a sense of realism in 3D environments comes with linear perspective; if you use an orthographic projection, you get a very odd sensation in the 3D. So we retained it, but we tried to get rid of the benefit it might have in making any kind of depth judgment by changing the hoop size and position in depth randomly. So this is the normal game view from above. On the right is a view from behind the spaceship, looking at these hoops as they come in, from a sort of spaceship captain's viewpoint. And you'll see that although here you might judge them in 2D to be about the same distance from you, in fact one of them is much closer and one of them is much further away. It's that critical depth judgment, where the hoop is in depth, that we're getting people to make repeatedly in order to steer through it. So we're trying to confound any ability of linear perspective, or in fact known size, because we randomize the size of the hoops; there's no benefit from either of those in making these depth judgments. Light and shade: earlier on, in Dave Woodhouse's Asteroids game, he'd used shadows on a ground plane to give you an idea of where the asteroids were relative to the ship, and that had helped a little bit. So we got rid of all the shadows; there's no ground plane in space for you to cast shadows onto. We do use lighting to give you the shading cues for shape, but we don't use it in any way that's going to help you judge the relative position of the spaceship to the hoops. So although this isn't by any means Call of Duty or some sophisticated graphics program, it's still using the same kinds of cues. Two other good monocular cues to depth are texture and aerial perspective. We removed aerial perspective completely: there's no fogging in here, so you can't tell the depth of a hoop from any kind of fog shading on it. And there's no texture on the hoops, so you can't tell from the change in size of the texture where the hoop might be relative to the depth of the spaceship. So neither of those, we hope, were useful depth cues. Motion parallax is very often used in 2D games to give a sensation of depth; parallax scrolling, where different distances scroll at different speeds, has been very widely used for a long time, and in perspective, again, objects at different distances will move at different speeds across the screen. We essentially removed that by having only a very limited game volume. If we go back to here, this is the top view, this is the other side of the cube that you're playing in, and that's the total, well, cuboid you're playing in; that's the total volume. So there's not a lot of depth in the scene, and so there are not a lot of motion parallax cues available. There are some, and we found that some experienced gamers eventually figured out how to use them in the 2D game, by moving the spaceship a long way off and then judging the hoops relative to the spaceship.
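To make the hoop-size and depth randomization just described concrete, here is a minimal hypothetical sketch: the depth and the projected angular size are sampled independently, and the physical radius is derived from them, so that the projected size on screen carries no information about depth. The function name and all numeric ranges are invented for illustration; they are not values from the actual game.

```python
import math
import random

def spawn_hoop(near_z_m=18.0, far_z_m=22.0,
               min_angle_deg=1.0, max_angle_deg=2.0):
    """Sample hoop depth and projected angular radius independently, then derive
    the physical radius. With occlusion, texture, fog and shadows also removed,
    only binocular disparity should reveal the hoop's depth."""
    z = random.uniform(near_z_m, far_z_m)                 # depth within the shallow game volume
    angle = random.uniform(min_angle_deg, max_angle_deg)  # angular radius, independent of depth
    radius = z * math.tan(math.radians(angle))            # physical radius giving that angular size at depth z
    return {"depth_m": z, "radius_m": radius}
```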
But even those remaining motion parallax cues still didn't make it that easy for them. OK. So what results did we get after trying to make it impossible? We had 17 participants, all screened for stereo vision. We gave them a few minutes to practice the game first, because steering things through hoops wasn't natural to everybody, and we randomized the order: people played it in 2D and in 3D at different times, but all of them had to try to go through 20 hoops. If they hit a hoop, the game paused for a few seconds and then restarted with the next one, but they weren't deducted points for crashing into things. They didn't explode and die either, which is good because they were students; we don't want to kill off our students. We took two sets of results: one on the 17-inch desktop display, which is actually a display RealD gave us two years ago, and one on our 4-meter projection display. If you look at the results, the top line here in red is the scores for each participant in 3D, the line in green is the scores they got in 2D, and the blue line is just sorted by the difference between their 2D and 3D scores. You can see the mean values; you weren't able to read them back there, but the mean value in 2D for this was 5.24 and the mean value in 3D was 17.5, so about three times better performance in 3D than in 2D. So it wasn't impossible in 2D, but I suspect the chance level, the number of hoops you'd get through anyway even if you did nothing, was a few; so in 2D we're probably doing slightly better than chance, and some cues still get through. When we moved on to the big display, the 4-meter display (so having tested cell-phone-size displays, here we wanted to test projection-size displays), again the mean in 3D is quite high, 15.8, and the mean in 2D is 5.73, so nearly three times as much, though not quite as high as on the desktop. Again it's sorted by the difference between 2D and 3D score. So there is a very significant difference in performance in 3D in both cases. So if you want to design a game to play on a 3D display, you want to try to design these decisive depth judgments to be either easy or hard in 2D, depending on what you want to do; in our case, as hard as possible, so that you get much better scores on a 3D display. A couple of other things we looked at: we compared performance on the desktop display with the 4-meter display, so we compared these scores with these ones, and there's no statistical difference between them. So playing in 2D on a desktop display, you have the same performance as playing in 2D on a projection display. With the 3D scores, it looks like there might be a difference; there's a slight wiggle here, some of the players are getting much lower scores in 3D, and they did report in the written discussion afterwards that somehow they found playing on the large screen harder. But actually, when you do an analysis and compare these statistically, there's no statistical difference between these two sets of scores. So as far as we can tell, this set of scores and this one could be from the same performance distribution. And I think that's all to say about those for now. So, conclusions. Overall in games, averaged over the games I've just mentioned, we found 3D giving you about two times better performance, a 94% performance increase compared to playing in 2D. We didn't make a game that was impossible to play in 2D, but nearly so. And if you skip down to the bottom, we're coming back to the same design challenge this year.
And we've got quite a bit more sophisticated game that may, in fact, be impossible to play in 2D. We've very carefully designed in these decisive depth judgments: when a player is making a depth judgment that's critical to their scoring, we remove as many of the other depth cues as possible. It's slightly different to the kind of thing Charles is doing, where he's trying to add in as many depth cues as possible. But there we go. And if you want to know, this is a question we got asked, and I'm surprised we got asked it: are stereo 3D games playable on cell phone displays? The answer is yes, and we don't see a performance difference between that and desktop or projection-size displays. Okay. Thank you very much. Thank you. Do we have any questions? Dave? Yeah, I'm just wondering if you did any tests on more intense games like first-person shooters and those kinds of things where depth is a factor, but there's a great debate about whether stereo vision really helps or not. Sorry, I didn't get the question. Did we do any tests on? On first-person shooters or other games where depth perception is a significant factor, and yet there's a big debate on whether there's value from stereoscopic display. We are doing some tests on a shooting-type game now, but it's not quite a first-person shooter. We did do some tests in a game: William Pegg, who did the very first one in 2004, did two projects, and in the first one he did an archery game where you had to shoot at an angle. But there was no benefit whatsoever from 3D for that, because the only judgment was an angular one; if you're only making an angular judgment, you just need to know which direction to shoot, not how far away it is. Yes? I teach video game design, and I'm curious if there is any information from your studies that would show a game designer how to use this in the design of a game. How could a game designer use the information? There's a much longer discussion in the paper about how we did the design and how all the different cues might and might not add, so hopefully that will help game designers use it; they may be trying to do the opposite to us, to make the game playable in both 2D and 3D. Okay. Anyone else? Raise your hand again. Did you notice any correlation with the order in which the students played? I mean, if they first played 3D and then 2D, did they score better when they already had some knowledge about the depth in the game? Okay, good question. No, there wasn't any difference with order, but we did measure people's gaming experience: we asked everyone to rate their gaming experience, and those who had lots of gaming experience (I think the word we used was vast) were able to figure out some cheats in the 2D mode to get slightly better scores, but they were still twice as bad in 2D as in 3D; they were just able to move the spaceship to the back and make judgments about where the hoops were. They had thought about it more than the more naive players. So, yeah, it was an experience issue. Okay. Thank you very much. Thank you.
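The score comparisons in this talk (2D versus 3D for the same participants, and desktop versus projection screen in each mode) are paired comparisons; the paper presumably gives the exact tests used. Purely as a generic illustration, with made-up placeholder scores rather than the study's data, such a paired comparison could be run as follows.

```python
from scipy import stats

# Placeholder scores (hoops passed out of 20), one entry per participant,
# same participant order in both lists; these are NOT the study's data.
scores_2d = [5, 4, 7, 6, 3, 5, 6, 5]
scores_3d = [17, 18, 15, 19, 16, 18, 17, 20]

t, p = stats.ttest_rel(scores_3d, scores_2d)     # paired t-test
w, p_w = stats.wilcoxon(scores_3d, scores_2d)    # non-parametric alternative
print(f"paired t = {t:.2f} (p = {p:.4f}); Wilcoxon p = {p_w:.4f}")
```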
|
We report on a new game design where the goal is to make the stereoscopic depth cue sufficiently critical to success that game play should become impossible without using a stereoscopic 3D (S3D) display and, at the same time, we investigate whether S3D game play is affected by screen size. Before we detail our new game design we review previously unreported results from our stereoscopic game research over the last ten years at the Durham Visualisation Laboratory. This demonstrates that game players can achieve significantly higher scores using S3D displays when depth judgements are an integral part of the game. Method: We design a game where almost all depth cues, apart from the binocular cue, are removed. The aim of the game is to steer a spaceship through a series of oncoming hoops where the viewpoint of the game player is from above, with the hoops moving right to left across the screen towards the spaceship, to play the game it is essential to make decisive depth judgments to steer the spaceship through each oncoming hoop. To confound these judgements we design altered depth cues, for example perspective is reduced as a cue by varying the hoop's depth, radius and cross-sectional size. Results: Players were screened for stereoscopic vision, given a short practice session, and then played the game in both 2D and S3D modes on a seventeen inch desktop display, on average participants achieved a more than three times higher score in S3D than they achieved in 2D. The same experiment was repeated using a four metre S3D projection screen and similar results were found. Conclusions: Our conclusion is that games that use the binocular depth cue in decisive game judgements can benefit significantly from using an S3D display. Based on both our current and previous results we additionally conclude that display size, from cell-phone, to desktop, to projection display does not adversely affect player performance. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30604 (DOI)
|
Okay, hello everyone, and thanks for coming to my talk. I will present our work on temporally consistent disparity estimation using a PCA dual cross bilateral grid approach. Now, that's a whole mouthful of words, but through the presentation I hope I'll be able to clear that up. This is work done by Jin Zhu during her graduation project at Philips, together with Kierar Tahan and myself. By disparity we mean the shift between the left and right image of a stereo pair that you get when you take two pictures from slightly different points of view, and there's a strong resemblance to the depth maps shown by the previous presenter. Here they represent actual pixel shifts between the left and right image, but, just as in the previous presentation, you can see that white surfaces come more to the foreground and black surfaces are more in the background. Now, why would we need disparity maps if we already have stereo information available? Well, it can be for multiple purposes. One of them is multi-view displays: most autostereoscopic displays, for example, are multi-view, and then you need to generate a multitude of views both between and outside the original stereo pair, for which you would typically use the depth information. Similarly, if you want to adjust the depth effect on stereoscopic screens: for many movies the depth effect, for me personally, is often too strong when I watch them on a TV, and I would like to be able to reduce it. That's also an application where you would use disparity maps. A large number of disparity estimation algorithms already exist, and many of them are gathered and evaluated on the Middlebury website, which is well known to many researchers in the field. The best algorithms there now achieve an accuracy of around 4% bad pixels, which is quite good. On the other hand, once you get to those numbers it often starts becoming difficult, I think, to quantify the quality in terms of a percentage of bad pixels. But it is a measure, and you have ground-truth data available for those datasets, so that's the best we can do as evaluation for now, I think. Another indication that the quality of those algorithms is reaching an acceptable level is that more and more algorithms get implemented in hardware and are released in actual products; many stereoscopic TVs now allow you to change the depth effect, which means that some kind of disparity estimation is done in most of them. So why would we still do yet another disparity estimation algorithm? My impression is that the quality is still lacking in some respects. One of them is that, as I said, the location of those bad pixels is starting to become very important. If you do a poor estimation within, let's say, a blue sky, it doesn't matter that much; if you do a poor estimation on the edges of an object, you immediately see the effect when you watch the views generated from it. And then the Middlebury dataset is a set of still images, and most people work only on still images. I was very happy to see the previous speaker also talk about video. I think the temporal consistency of video disparity maps is very important, because otherwise you start seeing flickering and a lot of artifacts that are not visible if you just show a number of frames one by one. So if you look into those 150 methods that are now evaluated on the Middlebury website, they are generally classified into global and local methods.
I will look here mostly at the local methods, because they are generally more easily transformed into something that can be implemented in real time. Most of those algorithms are subdivided into four main parts. First there is some calculation of an initial cost, which can be seen here below; this leads to an initial cost volume where you have a matching cost for every pixel and every potential disparity value. Then those data are aggregated, or filtered, to get somewhat more smoothness and to try to extend certain information to the neighboring pixels. Then from there the disparity map is computed: up to this point we work with a cost volume, and in this third step we reduce it to a two-dimensional disparity map. After that, most algorithms still perform some refinement step, which could, for example, do some correction in areas that are occluded, around edges, or similar things. If we were to go directly from the initial cost to a disparity computation, we would get an image like this one, which I think you'll all agree is by far not smooth enough to get close to this ground-truth depth map. So let us look more closely at a few existing approaches on which we based our research. The first one is the adaptive support weight approach by Yoon and Kweon. For the aggregation step they use a bilateral-filter kind of approach, where they weight cost values from neighboring pixels based on both the color difference with the central pixel and the distance to the central pixel. They use a weight for the left image and a weight for the right image, and then these are the initial costs that were calculated. Based on this work, Richardt and others developed in 2010 a dual-cross-bilateral grid approach, where their main goal was to get to a real-time implementation of something similar to the previous approach. What they did is change the aggregation function somewhat by using a number of Gaussian functions: this one is on the luminance difference between the two pixels in the left image, this one on the luminance difference between the two pixels in the right image, then one on the distance to the center pixel, and again your original cost. Once they got to this formulation, it lends itself very well to a cross-bilateral-grid implementation, which can be easily translated to processing on a graphics card and therefore can run in real time. So they get to a four-dimensional cross-bilateral grid, where the four dimensions are the horizontal and vertical position and the luminance differences in the left and right images. Then we get to our approach. For the initial cost calculation (I'll go down the four steps that I indicated before) we used the absolute color difference as well as the gradient difference and took a weighted sum of those two, because they seem to contribute somewhat different information: in the gradient difference you get some more edge information, and that's exactly where you can also do more precise matches. Then for the cost aggregation we use a similar function to the dual-cross-bilateral grid approach, but instead of keeping the luminance differences of both the left and the right image: it was clearly seen, both by us and by some other researchers, that the luminance difference from the right image doesn't contribute that much anymore.
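As a rough sketch of the two kinds of aggregation weights just described (not the exact formulations or parameter values from the adaptive support weight or dual-cross-bilateral grid papers; the sigma values below are placeholders):

```python
import numpy as np

def asw_weight(color_center, color_neighbor, pos_center, pos_neighbor,
               sigma_color=10.0, sigma_space=15.0):
    """Adaptive-support-weight style weight for one neighbour pixel:
    a Gaussian on colour similarity times a Gaussian on spatial distance."""
    dc = np.linalg.norm(np.asarray(color_center, float) - np.asarray(color_neighbor, float))
    ds = np.linalg.norm(np.asarray(pos_center, float) - np.asarray(pos_neighbor, float))
    return np.exp(-(dc ** 2) / (2 * sigma_color ** 2)) * \
           np.exp(-(ds ** 2) / (2 * sigma_space ** 2))

def dcb_weight(lum_diff_left, lum_diff_right, spatial_dist,
               sigma_lum=10.0, sigma_space=15.0):
    """Dual-cross-bilateral style weight: Gaussians on the luminance difference
    in the left image, the luminance difference in the right image, and the
    spatial distance to the centre pixel."""
    g = lambda x, s: np.exp(-(x ** 2) / (2 * s ** 2))
    return g(lum_diff_left, sigma_lum) * g(lum_diff_right, sigma_lum) * g(spatial_dist, sigma_space)
```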
So we wanted to take advantage of that fourth dimension of the bilateral grid to carry some other data, and what we did is perform a principal component analysis on the color left image: we used the first two principal components of the left image, disregarding the luminance and color data of the right image, for our cost aggregation, and used that in the cross bilateral grid. Then, for the disparity computation, we used the winner-takes-all approach, meaning that for every pixel you just choose the disparity value with the lowest cost, and we perform a quadratic interpolation to get some sub-pixel precision out of it. Afterwards, in the refinement step, we also applied some existing approaches, first performing a left-right consistency check and filling in the occlusions, and afterwards applying a weighted median filter. As I said, I was also very concerned with temporal behavior, and instead of filtering the disparity maps we found the results were much better when we applied a recursive filter on the bilateral grid itself. So, to calculate the grid for the current frame, we weight the bilateral grid of the previous frame and add to it the new data available in the current frame; in that way we automatically get more consistency while still keeping the edge alignment and so on, which are very important and very nice features of the cross bilateral grid. Now, there was one caveat in there, because, as I said, we are using a PCA for the dimensions of our bilateral grid. If you just blindly update at every frame, you are combining unrelated data, because your PCA basis changes from one frame to another. So what we did is keep track of how much change there is in the PCA analysis, and only when there was a change larger than, let's say, 5% did we update our grid with the new data from the new basis; otherwise we kept on working with the old basis, because the changes would be minimal anyway. That brings me to some results. Here we compare, on the left, the dual cross bilateral grid approach, in the center the proposed approach that we developed, and on the right the adaptive support weight approach, which was the first one I presented. I think both from the images here at the bottom and from the counts of bad pixels you will be able to see that our approach, while not exactly matching the adaptive support weight method, gets pretty close to it, and it's definitely way better than the performance of the dual cross bilateral grid approach, while still keeping roughly the same computational complexity. In my opinion, the differences that remain between those approaches are at the level where we should start using other measures as well to distinguish between them. Here are some more images where we can again see the different approaches on each of the images.
I think you'll probably agree that these results look pretty smooth and pretty similar to the adaptive support weight approach. Then, as we also emphasize the video behavior, we performed an experiment on two videos. Unfortunately I don't have the video data themselves with me, but we used one segment of an animation movie of about one minute and one piece of live-action video, and we asked a number of viewers to watch them on an autostereoscopic display and give their feedback on three different topics: one is the overall 3D impression, which is given here on the left, the second is the temporal coherence, and the third is depth artifacts. We asked them to evaluate these, and the blue color is labeled excellent, red is good, and it goes down all the way to the purple here. My impression from these results is that for the animation movie all algorithms performed fairly equally and fairly well. For the real live-action video you see more variation, and there you can see that our algorithm performed somewhat better and gave somewhat more stable results than most of the other algorithms. So, to conclude, I have presented our new automated disparity estimation algorithm, which is based on the dual cross bilateral grid approach by Richardt and others and on the adaptive support weight method. One of the main differences is that we replaced the left and right luminance images by the PCA of the left image, where we keep the first two principal components, and I hope I've been able to convince you that we achieve pretty good results both on still images and on video, while still keeping something which, although we did not implement it in real time ourselves, should be feasible in real time. So that concludes my talk. A question or comment, please? Hi, David Corrigan from Trinity College Dublin. I just want to ask: how does your temporal consistency framework account for local motion in a video? How is it robust to local motion? It is somewhat inherently dealt with in the bilateral grid approach, in the sense that it is an issue if you have very fast motion, but if it's reasonably smooth motion, not going too fast, then typically by applying some filtering on your bilateral grid you already spread the disparity values to the neighboring areas of the image, and you can extract those disparity data already. Maybe one more? No? A present for the speaker. Thank you. Thank you. Thank you.
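Below is a minimal, hypothetical sketch of some of the ingredients described in this talk: the PCA projection of the left image's colors onto two components, winner-takes-all disparity selection with quadratic sub-pixel refinement, a generic left-right consistency check, and the recursive temporal blending of the grid with a basis-change test. It operates on plain arrays rather than an actual bilateral grid, and the blending weight, tolerance, and 5%-style threshold are illustrative values, not those of the paper.

```python
import numpy as np

def pca_two_components(left_rgb):
    """Project the left image's colors onto their first two principal
    components; in the real method these become the range axes of the grid."""
    px = left_rgb.reshape(-1, 3).astype(np.float64)
    px -= px.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(px, rowvar=False))
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:2]]      # two strongest components
    h, w = left_rgb.shape[:2]
    return (px @ basis).reshape(h, w, 2), basis

def wta_subpixel(cost_volume):
    """Winner-takes-all disparity with quadratic (parabola-fit) sub-pixel
    refinement; cost_volume has shape (H, W, D)."""
    h, w, d_max = cost_volume.shape
    d = np.clip(np.argmin(cost_volume, axis=2), 1, d_max - 2)
    yy, xx = np.indices((h, w))
    c_m, c_0, c_p = (cost_volume[yy, xx, d - 1],
                     cost_volume[yy, xx, d],
                     cost_volume[yy, xx, d + 1])
    denom = c_m - 2.0 * c_0 + c_p
    offset = np.where(np.abs(denom) > 1e-9, 0.5 * (c_m - c_p) / denom, 0.0)
    return d + offset

def left_right_check(disp_left, disp_right, tolerance=1.0):
    """Generic left-right consistency check: keep a left-map pixel only if the
    corresponding right-map pixel agrees within `tolerance`; failures are
    marked NaN so a later step can fill them in."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    x_in_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    disp_right_at_match = disp_right[np.arange(h)[:, None], x_in_right]
    out = disp_left.astype(float).copy()
    out[np.abs(disp_left - disp_right_at_match) > tolerance] = np.nan
    return out

def blend_grids(prev_grid, new_grid, prev_basis, new_basis,
                alpha=0.7, basis_change_threshold=0.05):
    """Recursive temporal update: keep blending with the previous grid while
    the PCA basis is stable, and restart from the new data when the basis has
    drifted past the threshold (eigenvector sign ambiguity ignored for brevity)."""
    change = np.linalg.norm(prev_basis - new_basis) / np.linalg.norm(prev_basis)
    if change > basis_change_threshold:
        return new_grid, new_basis
    return alpha * prev_grid + (1.0 - alpha) * new_grid, prev_basis
```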
|
Disparity estimation has been extensively investigated in recent years. Though several algorithms have been reported to achieve excellent performance on the Middlebury website, few of them reach a satisfying balance between accuracy and efficiency, and few of them consider the problem of temporal coherence. In this paper, we introduce a novel disparity estimation approach, which improves the accuracy for static images and the temporal coherence for videos. For static images, the proposed approach is inspired by the adaptive support weight method proposed by Yoon et al. and the dual-cross-bilateral grid introduced by Richardt et al. Principal component analysis (PCA) is used to reduce the color dimensionality in the cost aggregation step. This simple, but efficient technique helps the proposed method to be comparable to the best local algorithms on the Middlebury website, while still allowing real-time implementation. A computationally efficient method for temporally consistent behavior is also proposed. Moreover, in the user evaluation experiment, the proposed temporal approach achieves the best overall user experience among the selected comparison algorithms. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|
10.5446/30605 (DOI)
|
Hello? Am I on? Yeah. Laser? No. That's a weak laser. Right. Okay, this is the control. Okay. Got it. In the summer of 2010, I was working for the Air Force Research Labs, and we were asked to take a look at the air refueling problem, or challenge, I should say. Wait, wrong one; I should do these with the other hand. Okay, yes. Now, in the air, this is a KC-10 tanker. The boom operator, who controls the position of that boom, sits up here, looking out a window right here; he's about 20 meters from the receptacle on the receiver aircraft that pulls up. The job of the receiver is to hold his position steady with respect to the tanker. The job of the boom operator is to plug the hole, which is about four inches in diameter, at 20 meters away. The overall goal is to do this training on the ground in a flight simulator; they currently do about 50% of their training that way, and have for some years. Ten out of ten experienced boom operators we interviewed indicated that they need a very significant improvement in the performance of the ground-based trainers: they just can't tell how far away things are. The current ground-based trainer is essentially a flat panel display, non-stereoscopic, 1.2 meters away, so they have no stereoscopic cues right now. Our expectation from the literature is that humans ought to be able to see somewhere in the range of 3 to 10 arc seconds, for young people with normal stereoscopic vision. At a 20-meter working distance, that corresponds to 0.3 to 1 foot of tolerance. However, there are a number of seasoned display professionals in the simulation training industry who are convinced that you wouldn't use a stereoscopic display if your object distances are more than 10 or 20 feet away; that's based on their experiences in the virtual reality world. So, as quickly as practical, I wanted to reconcile what the vision science literature is telling us with what the practical display guys are telling us. As quickly as we could, we put together a study where we mocked up eight different combinations of display configuration: we could turn stereo on or off, set the image distance (a flat panel projected essentially 1.2 meters away, or a system with the image essentially 20 meters away), and turn head tracking on or off. Okay. Here, I've got to get used to this. All right, back up one. Now, we built two physically separate display systems. This is a very large mirror, which you cannot really see in that photograph; it's about 7 feet tall and 5 feet wide. The observer looks at the image of the screen through that mirror, and what that does is set the focus distance and the vergence distance at 20 meters away. On the other hand, they could turn around on this platform and look at this direct-projection, direct-view screen, which is 1.2 meters away. What you're seeing here is the back of that screen; it's flash photography, so it washes the contrast out. The two images were the same contrast and luminance. Or they could look down into that black hole here, which goes down into here, and see the collimated image 20 meters away. The procedure was that we had experienced boom operators estimate the distance between the nozzle on the end of the boom and the receptacle, where we randomly placed the aircraft at different distances, and we measured their distance-estimation performance. The punchline of that study is the use of stereo versus non-stereo; non-stereo is what they currently do in their ground-based trainers.
There was a very large, statistically reliable improvement in their distance-estimation performance, even at the 20-meter working distance. Comparison with a collimated condition, that is, setting the image distance optically to 20 meters away, also improved performance, although stereo was the big-hitter variable here. Collimation significantly improves comfort: it eliminates that accommodation-vergence mismatch. So the conclusions from that first evaluation are that the combination of stereo and collimation is clearly indicated for this job. Compared to their current ground-based boom operator trainer, we had about 1/4.3 of the error, a very highly statistically reliable difference. The interesting thing is that for the collimated stereo condition their distance-estimation standard deviation was about 0.61 feet, which corresponds to a disparity threshold of six arc seconds, which is just what the vision science literature tells us it ought to be. So next my job became: how do we write a requirement, so the Air Force can buy several of these display systems, that guarantees we're going to be successful? The Air Force has a history of attempting to buy stereoscopic boom operator trainers that have not worked out. Six arc seconds is a very small fraction of the pixel spacing: the difference between your right- and left-eye images is only 5 to 10 percent of the pixel spacing, even if you have something like a one-arc-minute display pitch, which is very fine. So we fully expect that any kind of image processing that can screw up the relative positions of objects and edges, or points of light, polygon edges, or lines, can really kill the effectiveness of the stereoscopic presentation. So we expect spatial resolution to be important, along with anti-aliasing and image warping; image warping is done a lot in flight simulation. We have not found any papers that quantify the effects of those variables; if anyone knows of a paper, please email me, because I've been looking long and hard for papers that measure this stuff. For the next study I set up a mirror stereoscope with lenses and mirrors that were steerable. It allowed us to set focus and vergence angle at exactly 20 meters, with no crosstalk between the eyes; we had apertures in there. In this evaluation their job was to tell us whether this rectangle is in front of or behind the plane defined by these two edges, so it's a lot like a Howard-Dolman-type study. In the evaluation we could systematically vary the display resolution, or pixel pitch, and the anti-aliasing kernel width. So here's an example of relatively coarse pixels and not-so-good anti-aliasing; this would be very high spatial resolution, a pixel pitch of half an arc minute, and good anti-aliasing. The punchline from this second study: these two plots show exactly the same data (some people like contour plots, some like response surfaces), but the punchline is that yes, with practical, affordable, currently available display systems you can achieve eye-limited stereoscopic depth perception. For our four observers, we get an average of 5.5 arc seconds disparity threshold. Now, the interesting thing is that as pitch gets coarser and coarser, performance decreases and the threshold rises, but not nearly as fast as it does if you're doing insufficient anti-aliasing; in other words, this slope is much gentler than this cliff. So yeah, if you email me I'll send you this paper and you can see a lot more detail about how these data were collected.
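As a quick check of the geometry behind the numbers quoted above (a 0.61-foot depth-estimation spread at a 20-meter working distance corresponding to roughly six arc seconds of disparity), here is the standard small-angle approximation with an assumed 65 mm interpupillary distance; both the IPD and the helper function are assumptions for illustration.

```python
import math

def disparity_arcsec(depth_error_m, viewing_distance_m, ipd_m=0.065):
    """Approximate binocular disparity (arc seconds) for a depth difference
    depth_error_m at viewing_distance_m, using delta ~= IPD * dD / D^2."""
    delta_rad = ipd_m * depth_error_m / (viewing_distance_m ** 2)
    return math.degrees(delta_rad) * 3600.0

# 0.61 ft is about 0.186 m of depth-estimation spread at a 20 m working distance:
print(round(disparity_arcsec(0.61 * 0.3048, 20.0), 1), "arc-seconds")  # ~6.2
```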
Another interesting point is cost. Right now flight simulation lives right about here, around 2 to 2.5 arc minute display pitch, and who knows what for anti-aliasing; probably standard hardware-based, NVIDIA or ATI graphics-board-only anti-aliasing, which we estimate puts them right around in here. It's far, far cheaper to turn up the amount of anti-aliasing you're using than it is to decrease display pitch. As you decrease display pitch, the number of image generators, the number of projector pixels, and the alignment system for all that make the costs increase very quickly. The interesting thing is we found no papers that quantify the effect of these variables on stereoscopic display performance; again, if you know of any, please contact me. So this is the disparity threshold data. We also measured ratings of comfort; it's a similar function, but you can see the slopes of these things are different. I'm going to flip back to performance for disparity: here, anti-aliasing is a far more powerful variable than display pitch; for comfort, they're more balanced. Some of the HMD and virtual reality literature is from the mid-90s: see, we only went up to pixels as coarse as three arc minutes, while a lot of those studies in the 90s were done at 6, 7, 8, 10, 14 arc minutes. Some of those early head-mounted displays and stereoscopic displays, on which a lot of the recommendations are based, were way off our map. Okay, conclusions from evaluation two. Yes, it is possible to achieve eye-limited stereoscopic disparity or depth perception performance with affordable and currently available display technology, but only if you get the anti-aliasing under control. There have been previous efforts within the Air Force to acquire these systems, and I believe one of the reasons they failed is that they didn't even ask: anti-aliasing, why is that important? The typical hardware-only solution does not provide sufficient anti-aliasing. I'm not saying it's not capable of it, but in the way it's been used for the past few years in simulation training, it has not been sufficient. We don't have a metric in the simulation training industry for quantifying the amount of anti-aliasing somebody's doing, and how it's being done by the supplier is usually a trade secret. So what we need is a metric that doesn't require that we know how they do it, but gives us a way to measure whether they're doing it and whether they're doing enough of it. So we're now working on the development of an objective metric of anti-aliasing sufficiency. Radial test patterns have been around for years for the assessment of anti-aliasing, where usually you have a pair of human eyeballs and a brain analyze the pattern. I'm very interested in a camera-based metric; it's very repeatable. That same test pattern can also be used to measure system resolution, but that's not the topic of this paper. So here's an example of clearly unacceptable, where stereoscopic disparity thresholds are significantly raised; that's the amount of moiré noise you see in it. That one's clearly acceptable. The problem with human eyeballs on these kinds of targets is that there's not a real clear-cut line between acceptable and unacceptable; it's hard for a human to make that assessment. So, a candidate test pattern. This is a test pattern; you have a focus aid. Basically, you put this pattern up in your flight simulator, sit down at the eye point, take a consumer color camera (it's very affordable), take a picture of it, visualize it on a laptop, and get your answer.
That's where we're trying to go with this. These alignment marks here allow the software to scale that thing so we know the spatial frequencies involved. We can check and see if they've done sufficient gamma correction of their display system. If not, we'll reject the measurement. With modern cameras, a $400 camera can put 8 or 10 camera pixels per display pixel, so we're no longer bound by the capabilities of cameras. So the basic way to analyze this image is you take that pattern and run a radial Gaussian, or donut-shaped, window on it to look at the area of interest. We then take a difference of Gaussians, circularly symmetrical linear spatial filters that throw out most of the radial pattern and keep most of the moiré pattern. And the metric is simply the RMS value of the residuals in that image. There are 50 ways you might construct the actual metric that looks at that image and derives a unitary metric from it. I'm interested in inputs from anybody. So we're basically collecting candidate metrics right now, and I'll run them through their paces. Okay, sources of variance. Next I want to see how well that basic metric works. In other words, if you run that metric over and over on the image, how repeatable is the metric? So I did several series of measurements: camera on a tripod versus camera being handheld. When you move the test pattern around relative to the pixel structure of the display system, the nature of that moiré pattern changes drastically. The hypothesis is that the overall magnitude of the spatial sampling artifacts is pretty constant even though the pattern of the moiré is changing rapidly. So we had the pattern held constant relative to the pixel structure on each photo, or the pattern moved every time. What we're trying to do is tease out the sources of variance in this metric. So here's the answer: hand-holding versus not, plus moving the pattern relative to the pixel grid, contributed a standard deviation of 0.18. What this is saying is that about 1% of the range of this metric is the noise introduced by moving the pattern relative to the pixel structure plus all other sources. Since that's a small fraction of the range of the metric, the metric ought to be useful for detecting changes in the amount of anti-aliasing you're doing. Next I changed the anti-aliasing filter, the amount we were doing, to 16 different levels and took 10 camera shots at each level. So these are actually clusters of 10 points here. You can see the standard deviation is a small fraction of the range of the metric, so the metric has the capability of seeing quite small changes in the amount of filtering you're doing. Down here it's 0.25% of the range. What we care about is right down here; here's where you start hammering visual performance on the stereoscopic task. For the boom operator task, we need them to be down here. Here's the disparity threshold from the previous study as a function of anti-aliasing filter width. The correlation between disparity threshold and this metric is 0.994. So far so good. Next we're going to get a much richer data set where we wiggle more variables: we will turn image warping on and off as well as play around with the anti-aliasing kernel width. So we've proposed and described a metric of anti-aliasing sufficiency. It's measurable using a common handheld consumer camera, inexpensive. It seems to correlate very well so far with disparity thresholds.
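The analysis pipeline just described (donut-shaped window, difference-of-Gaussians filter, RMS of the residuals) can be sketched in a few lines. The window radii and filter scales below are placeholder values, not the ones used in the study, and this is only one of the "50 ways" a unitary metric could be constructed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sampling_noise_metric(photo: np.ndarray,
                          r_inner: float,
                          r_outer: float,
                          sigma_small: float = 1.0,
                          sigma_large: float = 4.0) -> float:
    """RMS of difference-of-Gaussians residuals inside an annular window.

    photo: grayscale camera image of the radial test pattern (float array),
    already scaled/registered using the pattern's alignment marks.
    r_inner, r_outer: radii in pixels of the donut-shaped region of interest.
    sigma_small, sigma_large: DoG scales chosen to suppress the radial pattern
    itself while passing the moire; placeholder values.
    """
    y, x = np.mgrid[0:photo.shape[0], 0:photo.shape[1]]
    cy, cx = (photo.shape[0] - 1) / 2.0, (photo.shape[1] - 1) / 2.0
    r = np.hypot(y - cy, x - cx)
    window = (r >= r_inner) & (r <= r_outer)        # donut-shaped area of interest

    dog = gaussian_filter(photo, sigma_small) - gaussian_filter(photo, sigma_large)
    residuals = dog[window]
    return float(np.sqrt(np.mean(residuals ** 2)))  # the metric: RMS of the residuals
```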
Since the metric variance is a small fraction of the range, I think we can measure very small changes in the design of the imaging system with it. Questions? Thank you, Charles. Okay, time for a quick question. Which one? Hi Charles, thank you for that talk. I'm curious, do you think the luminance level of the display is going to have an effect? Like, do you have a feel for how much refueling is done at night versus during the day? They do a lot of refueling at night. That's the easy answer. The other question is how the luminance of the display system will affect the ability to measure aliasing. It probably will. I did these tests at about 70 foot-lamberts, I think it was. In the next run, I want to measure it as a function of luminance as well to assess that. So far so good. The standard deviation is a small fraction of this thing, so I suspect it can probably tolerate pretty low luminance levels. I closed the aperture of this camera down as far as I could so that it would be diffraction limited, so that the camera would not introduce aliasing artifacts. On the next round, I'd like to find a camera that has an anti-aliasing filter optically built into it, but I've got to find one of those. Okay, thank you very much. Thank you. All right.
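The repeatability and correlation figures quoted above (standard deviation as a small percentage of the metric's range, and a 0.994 correlation with disparity threshold) reduce to two simple summary statistics. The sketch below shows one way to compute them from repeated camera shots; the input structures are hypothetical and no measurement values from the study are included.

```python
import numpy as np
from typing import Dict

def summarize_metric(runs_by_level: Dict[float, np.ndarray],
                     thresholds_by_level: Dict[float, float]) -> None:
    """Repeatability and correlation summary for the sampling-noise metric.

    runs_by_level: repeated metric values (e.g. 10 camera shots) keyed by
    antialiasing filter width.
    thresholds_by_level: measured disparity threshold (arcsec) per filter width.
    """
    widths = sorted(runs_by_level)
    means = np.array([np.mean(runs_by_level[w]) for w in widths])
    pooled_std = np.mean([np.std(runs_by_level[w], ddof=1) for w in widths])
    metric_range = means.max() - means.min()
    thresholds = np.array([thresholds_by_level[w] for w in widths])

    # Repeatability: shot-to-shot scatter as a fraction of the metric's range.
    print(f"repeatability: {100 * pooled_std / metric_range:.2f}% of metric range")
    # Agreement with the perceptual data from the earlier evaluation.
    r = np.corrcoef(means, thresholds)[0, 1]
    print(f"correlation with disparity threshold: r = {r:.3f}")
```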
|
This paper describes the development, measurement, computation, and initial testing of a metric of antialiasing sufficiency for stereoscopic display systems. A summary is provided of two previous evaluations that demonstrated stereoscopic disparity thresholds in the range of 3 to 10 arcsec are attainable using electronic displays with a pixel pitch as coarse as 2.5 arcmin; however, only if sufficient antialiasing is performed. Equations are provided that describe the critical level of antialiasing required as a function of pixel pitch. The proposed metric uses a radial test pattern that can be photographed from the user eyepoint using a hand held consumer color camera. Several candidate unitary metrics that quantify the spatial sampling noise in the measured test pattern were tested. The correlation obtained between the best candidate metric and the stereoscopic disparity threshold model from our previous paper was R2 = 0.994. The standard deviation of repeated measurements with a hand held camera was less than 0.5% of the range of the metric, indicating the metric is capable of discriminating fine differences in sampling noise. The proposed method is display technology independent and requires no knowledge of the display pixel structure or how the antialiasing is implemented. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
|