10.5446/20434 (DOI)
Yeah, so my name is Jeff Johnson. Just a quick bit about me: I've worked on the GeoNode project since 2010. I founded a consulting company called Terranodo, I worked at OpenGeo/Boundless for quite a while, I'm a consultant at the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR), and I founded an aviation augmented-reality startup. I've got two boys, I live in San Diego, and that's my GitHub URL. I have a couple of disclaimers. One: the affiliation on my badge is Terranodo, but I actually consult for the Bank as myself. And this talk is based on a research report that the World Bank, specifically GFDRR, commissioned from Karl Fogel of Open Tech Strategies. I have a copy of it here, and I had to get special dispensation from the World Bank to print copies because it's not quite finished. I think the Bank is used to producing reports that might move stock markets or topple governments, which this is certainly not, but I still had to get that special permission. Karl Fogel is the gentleman who wrote a book called Producing Open Source Software, which I would consider the bible of how to run an open source project; many of you have probably read it. I should say right up front that this is not at all a technical talk. I'm going to spend one slide on the project itself. It's really about reviewing the history of the project, who contributes to it, who funds it, and how both of those things have evolved over time. The purpose of the report was really to look at the Bank's return on investment in open source software, this being one of its first large investments in open source. So we're looking at the project from an economic perspective and trying to get at best practices for when big organizations invest in open source. And I'd really encourage lots of other projects to look at their sustainability. The last slide of the previous talk was about sustainability, and I think it's hugely important to look at your project from that perspective and at the best practices that have emerged for keeping a project sustainable and funded over time. So this is the only slide about the technical nature of GeoNode; if you're hoping for technical bits, you're in the wrong talk. Basically, GeoNode is a content management system for geospatial data. It's based on Python and Django, and it's essentially a user-facing spatial data infrastructure. It's now used by hundreds of organizations around the globe, from the very biggest GIS agencies on the planet to tiny ones; I just came from Sri Lanka last week and gave a workshop there. It's based on stable, mature open source tools and frameworks. We've not tried to chase the latest fad and have stuck with things that are mature and stable: GDAL/OGR, PostGIS, GeoServer, OpenLayers, GeoExt, Django, Bootstrap, jQuery, Angular, that sort of thing. So, just a brief history of the project. We started in 2009 with a little bit of seed funding from the World Bank's Global Facility, again, the Global Facility for Disaster Reduction and Recovery; we've changed those Rs a few times. The initial development work was done by OpenPlans/OpenGeo, which is now Boundless. And in January 2010 there was the Haiti earthquake, and the World Bank, UN OCHA, and others really realized they had to coordinate on data sharing.
I remember doing a lot of image processing and data distribution. I think they really realized at that time that sharing data between agencies in an emergency situation like that is a real problem. There were some good efforts to get data sharing going between these big agencies, but I think that was the pivotal turning point; certainly with things like OpenStreetMap and using imagery to digitize into OpenStreetMap in a hurry, it was a pretty big realization that they needed to up their game. So later in 2010, GFDRR formed a team devoted to GeoNode and open data and hired a full-time developer. Later that year, the GEM Foundation, the Global Earthquake Model, an interesting project funded by the reinsurers, the insurance companies that insure the insurance companies, began to use GeoNode as the basis of their OpenQuake platform. The first presentation about GeoNode was at FOSS4G in 2010. GeoNode 1.0 was released in December 2010, which is about when I joined OpenPlans/OpenGeo. In 2011, Harvard University's Center for Geographic Analysis used GeoNode as the basis of their WorldMap project, and later that year GeoNode was used as the basis for the Risiko platform in Indonesia. Then we had our first roadmapping summit in 2011, which I'll talk about in a little while. The MapStory project was founded in 2011 and uses GeoNode as its foundation. In October 2011, the emerging open data and disaster risk management group at GFDRR founded the Open Data for Resilience Initiative, or OpenDRI. The first really big code sprint was held in February 2012, and version 1.1 was released. More companies started offering commercial support, ITHACA and SpatialDev among them, and of course OpenGeo began to expand its portfolio of GeoNode customers. Late in 2012, the Army Geospatial Center, Army Corps of Engineers, began the ROGUE Joint Capability Technology Demonstration (JCTD) using GeoNode, and that's where GeoGig and other projects like it came from. In 2013, GFDRR made a big push, deploying a ton of GeoNodes in Caribbean countries, and we had a sprint in Alexandria. I could go on, but those are the basic data points in the history of the project. The last one there will put a smile on Jody's face: if all goes well, we'll complete OSGeo incubation in the next week, and we're getting onto a regular release cycle. So that's a brief history of the project. But mainly I'm going to talk about this research report, and I have several copies here if you want to take a look. Basically, GFDRR engaged Karl Fogel to look at the history of the project. The World Bank is making efforts both to codify an enterprise-wide GIS policy and to codify an open source policy, and this is one of the research reports, a case study really, that has gone into preparing that enterprise-wide open source policy. Karl and his team did phone interviews with about 15 people from 10 different organizations, and they did quantitative analysis of our issue tracker, mailing lists, source code repositories, and bug trackers, all that sort of thing.
They also did some quantitative analysis of non-public data. Being a bank, the World Bank and GFDRR are very interested in the return on the money they've spent and in using that to justify further investment. And they did a qualitative look at the interactions between participants, bug tracker activity, and so on. So that's the methodology. These are the principles that have emerged from and guided GFDRR's investments in open source, and I think they have led to the success of the project and its sustainability. One is that they both hired internal people and contracted out to other organizations. Outside developers really increase the commercial viability and the social surface area of a software project, and internal staff can contribute to developing the software and provide the day-to-day oversight of the outside contractors. I think that's been pretty important in the way they've run the project. They've also done a huge amount of sponsoring of in-person events. All over the world we have these roadmapping summits and code sprints, and GFDRR has been really instrumental in pushing those: people meet, they learn, they collaborate, and I think we all collaborate much more effectively than we could if we only worked together remotely. They also used their institutional cachet to create partnerships. A lot of staff time was dedicated to making connections with peer institutions, the UN, and other big organizations, and many of those organizations have gone on to invest in the project themselves. And there's been a huge effort to train users and developers; I'll talk a little more about that, but I've now been to probably 15 countries all over the world to do trainings. GFDRR really encourages client countries to deploy GeoNode, invest in those deployments, and allocate staff time. I should add that a lot of people I talk to have a misconception about what the World Bank does, so just a tiny bit of background: it's a development bank. Tax dollars from Europe, the US, and many other countries are used to loan money to developing countries for building bridges, roads, flood-control infrastructure, and things like that. Those loans are generally amortized over a really long time, and the goal is to raise GDP and promote investment. Generally these projects come with a technical assistance package that says: while we're building a road and a bridge, we're also going to work on improving your ability to run, or develop, the GIS that supports the project. That's what the Bank does. So GFDRR has made approximately a one to one-and-a-half million dollar investment over seven years. Like I said, they simultaneously hired outside firms and internal staff to do the work, and they've provided in-country investment to train governments, academia, and local firms. I just came from a training in Sri Lanka where there were many university professors, some of them participants in GeoForAll, which is fantastic.
There were science professors and some local firms, who in many cases basically get paid to show up to the training, and governmental agencies, and GFDRR really goes out of its way to grow the community. And like I said, they pursued partnerships with other organizations in the disaster risk management (DRM) space and encouraged them to collaborate and co-invest in the platform together. We also collaborate closely, technically and on the roadmap for the open source project, with stakeholders outside the disaster risk management space. You can see the breakdown there: a fairly equal chunk of outside development and internal development, plus support services and outreach and training. That's where that roughly one to one-and-a-half million dollars has gone. Obviously, we all know these best practices in open source development, right? Run the project as open source from the very beginning, encourage and engage other organizations commercially and as partners, and invest in collaboration infrastructure, community-building events, and in-person events. And I think the really key thing is this: as a big organization, you can use your funding choices to signal to peer institutions. We're invested in this, we're putting our money and our institutional cachet behind it, you should do the same. To quote the paper: at this point its future as a public good is secured, it's now used and maintained by many other organizations, governmental, non-profit, and commercial, and GFDRR can benefit for a long time after, even if it stops investing in the platform. One thing that's really important, I think, is to choose the right contractors and partners. It's been really important to the success of the project that the early contracts went to an established industry player, not to a random small firm but to somebody with real cachet in the community. That's how I came to the project: OpenGeo is doing this, so it's probably something I should pay attention to, right? That drove that kind of socialization among the potential early adopters for free. GFDRR also pushed a fully open development strategy from day one. They behaved like a fully open source project even before we had running software; we had issue trackers and mailing lists and did roadmapping well before we ever had running software, and working with OpenGeo was super instrumental in that. And then very quickly OpenGeo had its own motivation to start thinking about commercial demand and other business opportunities. Essentially, as soon as people realized, oh, the World Bank is using OpenGeo to develop this software, maybe we should talk to OpenGeo about using that same software in our organization, they had commercial opportunities very quickly. It's kind of obvious, but open source projects are healthiest when the contributing organizations all work together while also pursuing their own objectives; that just leads to a healthy project. And Karl has a lot more experience than I do, in and outside of the open source space.
And then, the more organizations that are working together and involved in a project, the greater the incentive to improve it and keep it stable and viable. If you've got a project that's only supported by one organization, they can just up and walk away and the project isn't very viable. But as soon as many organizations are contributing and supporting it, it becomes more sustainable; maybe one organization walks away, but they may come back at a later date. The other thing is that, working with OpenGeo and generally following best practices, GFDRR has really pushed these developer connections and communities. There's a whole lot of you in here that I've worked with for a long time. We've built very personal relationships between the developers and the users, and that leads to greater resilience of the project as a whole. As developers change organizations, the knowledge and expertise are carried with them. I'm going to start talking faster here, Steve. A developer's position and influence in the community isn't really tied to their employer; it belongs to them. And then, like I said, they went way out of their way to form partnerships. Initial partnerships and early-adopter organizations signal to other organizations that the project is viable, and you get a snowball effect as more and more organizational contributors join. The credibility of the Bank and GFDRR really drove that early involvement by other organizations, and as the ecosystem grows, other organizations have a vested interest in bringing in new partners and promoting the stable growth of the project. So this slide shows, for 2010 and 2011, all the organizations that supported the project or participated in it, either as end users or contributors. We started adding more in 2012 and 2013, and then a whole bunch more in 2014 through 2016. If you take a look later, there are some of the largest GIS agencies on the planet in that list. We also organized these collaborative roadmapping events, which were really cool. We brought all the stakeholders together to discuss their own priorities, and each organization presented its implementation and discussed its goals in the short and medium term. We agreed on sources of technical debt that could be tackled together. I think this is really important, but I don't have enough time to talk about it; it's really hard to address technical debt in these projects because nobody generally wants to pay for refactoring. Then we brainstormed development priorities, worked towards a shared roadmap, tried to reduce duplicated effort and promote awareness of each other's priorities, and then immediately used collaboration tools to document that roadmap and used it to set funding priorities among the organizations. I'll just skip over this, but basically we have these consensus exercises where everybody agrees these are the important things, we're going to share funding, and it becomes a kind of menu the stakeholders can use to contract future work. GFDRR also encourages the Bank's client countries to use GeoNode in their projects, and we encourage them to use local and international contractors to produce, collect, and publish the data.
And then these local geospatial firms get really engaged: they do the in-country support and provide expertise, and many of them go on to use the project in their other work. So now we have this large global ecosystem of small providers, some of you are in here now, who use GeoNode in their projects, and many of them have gone on to become core contributors or to provide contributions. I'll go through this really quickly, but we also promote a culture of documentation, which I feel pretty hypocritical about after how bad the docs were during our workshop. But documentation is really, really important, and having a nice website pays really big dividends in promoting the project and getting people on board quickly. And so we've had this growth in the commercial provider ecosystem: in 2010, sorry, that slide should say 2010, there was only OpenGeo, and now there's this large list of companies that offer commercial support. And here's a look at the commits. But let me slow down and go through the best practices. Number one: run the project in the open from the very beginning; like I said, begin behaving like an open source project from day one. Really encourage other commercial organizations, to diversify the investment and grow that commercial provider ecosystem. Take an active interest in evangelism and communication, especially at the beginning; getting the word out drives usage and adoption, and many users go on to become contributors. Find and encourage the right partners; persistence in that advocacy pays off when partners really understand how they benefit from open source, which is a difficult sell sometimes. Also invest in community infrastructure and process; it really amplifies people's work. The investment shouldn't be only in code, but also in infrastructure and documentation. And like I said, hold events and get people to attend: face-to-face meetings build long-lasting relationships and trust that survive when you go back home, and that's why we're all here this week. Again, the initial investment can be a funding signal that indicates an organization's commitment to other potential partners. And to expand the range of users, really invest in user experience. I know a lot of us in open source just don't... okay, that's it. You said I could have 15 minutes. All right, that was 20? That was 22? All right, I'm sorry, that's what I was told. Any questions? I'll just leave those slides up there; you can read them. Go ahead and ask some questions. Thank you, it's really an amazing story. Questions? I'll go with you first and then Eddie. Just thinking way, way back to when it first started, was it one person's idea? In essence, yeah, two people: Chris Holmes, who was at OpenGeo, and Stu Gill at the Bank. I think they had some beers and realized that there just wasn't something that fit this need in the open source ecosystem. Obviously OpenGeo had done GeoServer and worked on PostGIS and things like that, but they didn't really have a front-end, user-facing piece; web services only get you so far on the front end.
You have to put something on top of it so people can discover the data, and that's where GeoNode came in. Eddie, what was your question? Can you tell us a little more about the commercial ecosystem that now surrounds it? I can go back; I don't think Karl has the whole list here, but this is the list of companies that provide commercial support. My company's on there. All these companies either do core development or bug fixes, or a lot of them just stand it up, support it, and help people load their data, and many of them are companies outside the US. I can think of companies like AgriSoft: I went and gave a training for them for the Bank, and two years later I talked to those guys and they'd gone on to use it in all these different projects all over the place. The Bank really made that initial push to train people up and engage local firms, and it has paid off tremendously, because they just go on: oh, this is cool, we'll use it in all of our projects. You have everybody from GeoSolutions to Kartoza, Tim Sutton's company, so it's quite a range of commercial providers, and I think that makes it pretty easy for people to find support and help and get their stuff up and going. Any other questions? Yep, go ahead. Can you estimate how many instances of GeoNode there are? It's really difficult. It's certainly in the hundreds; there are quite a number that aren't public, or aren't online, or that we can't find. But yeah, there are quite a few now. Okay, that's good. Thanks. Thank you.
The GeoNode project has grown from an idea and a handful of early partners 5 years ago to a large and thriving open source project and downstream ecosystem. This talk will discuss the cast of characters and organizations that currently contribute to GeoNode, how this community has grown and evolved over time, and the growing pains encountered and lessons learned in the process. Particular focus will be placed on the technical and collaborative aspects of growing and managing a diverse community, looking at how new community members are brought into the fold and how the resources that organizations with different needs and requirements bring to the table are marshaled most effectively to achieve economies of scale when developing new features. The GeoNode community has begun a quantitative analysis of organizational return on investment from open source, and initial results of this study will also be presented.
10.5446/20433 (DOI)
My name is Johan and I'm going to talk about OSRM, the Open Source Routing Machine. The two previous talks already set the scene quite well, so I think you're all tuned into routing and we can dive right in. Before that: I'm from a company called Mapbox. We are a platform of building blocks for geo applications: we do maps and geocoding and directions and satellite imagery, all the things you need to build your own applications and websites. We are mostly built on open data; we try to use open data where we can, and we try to open up data sets and make them available to the public. An example is OpenAddresses, one of the alliances we try to build around having geocoding data available to the public. We also build heavily on open source. For example, we support Mapnik, the map renderer; we have a client-side vector-tile rendering engine, Mapbox GL Native, that Vlad is going to talk about later today; and we have the routing machine, OSRM. OSRM is a shortest-path routing engine, very similar to pgRouting, which you just heard about: you set two points and get a route between them. So here's an example: I start in Berlin, where I live, and go around Europe. The one thing you should notice right away is that it's freaking fast. OSRM was built with performance at its heart; it's really important that you get the route immediately. This only shows one start point and one end point; OSRM can handle way more waypoints in between, but today I'm only going to focus on this case. So yes, performance is really important if you want to do routing. But to achieve this performance with OSRM, you cannot just take the road network and route on it; you have to do some work beforehand. This is very different from pgRouting, where you load your data into the database, nothing further happens to it, and you route on basically the raw road network right away. We have to do some processing before that. OSRM loads OSM data, so it's built with OSM in mind. If you have other road graphs, you can load them as well; you just have to bring them into the OSM schema, as an XML file or a PBF file, and write that transformation yourself. The processing is a step where we take the OSM data, filter it, and then use an algorithm called contraction hierarchies, which filters out parts of the network but also creates shortcuts, so that we don't have to look at the whole network when answering routing requests. For example, if you want to go from Bonn to Berlin, we're not going to look at all the roads in between; there will be shortcuts we can use so we don't have to examine the whole road network. This achieves the super fast query times, but the pre-processing takes a long time. At Mapbox we usually work with the whole OSM planet: we take the planet file that OSM publishes regularly and run the processing on that. We use r3.8xlarge instances from EC2, which are really beefy, some of the biggest instances you can buy and really expensive as well, I think around $2 per hour, but they have 32 CPUs and 240 gigabytes of RAM, which is quite insane. It takes nine hours to process for car, 13 hours for bike, and 18 hours for walk. One thing you notice here: we can not only do car routing, we can also do other kinds of routing. These are the three profiles that we at Mapbox define as important for us.
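To make the query side concrete, here is a minimal sketch of the basic two-point request against an OSRM HTTP server. It assumes you have a local osrm-routed instance listening on port 5000 with a car dataset already pre-processed; the coordinates are just two points in Berlin picked for illustration. This is not part of the talk, only an illustration of the public OSRM v5 route API.

```python
import requests  # third-party HTTP client, assumed to be installed

# OSRM expects lon,lat pairs separated by ';'. The profile name ("driving")
# is part of the URL path.
start = (13.388860, 52.517037)
end = (13.428555, 52.523219)
url = (
    "http://localhost:5000/route/v1/driving/"
    f"{start[0]},{start[1]};{end[0]},{end[1]}"
)

resp = requests.get(url, params={"overview": "false"}, timeout=5)
resp.raise_for_status()
route = resp.json()["routes"][0]

# duration is reported in seconds, distance in meters
print(f"ETA: {route['duration'] / 60:.1f} min, "
      f"distance: {route['distance'] / 1000:.1f} km")
```

Note that which profile actually applies is baked in at pre-processing time by the Lua profile used there; for a single-dataset osrm-routed instance the profile segment of the URL is essentially a label.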
There are other people who do different kinds of routing. You can do truck routing. I always joke about Segway routing; I think nobody has actually built it so far because curb information isn't really there in OpenStreetMap. Wheelmap, which maps wheelchair accessibility in OpenStreetMap, is trying to change that, and then we could also do Segway routing much better. But you can customize these profiles the way you want for your specific use case; these three are the use cases we support at the moment. Keep in mind that processing takes a long time, but we do this because we want to be really freaking fast. I pulled these numbers out yesterday: this is the average response time for route requests on the Mapbox OSRM instances, for car routing on the planet. There are some spikes there because sometimes people pound us with really expensive 25-waypoint queries, and those take several hundred milliseconds, but usually a two-coordinate query comes back in 10 or 20 milliseconds. It is really fast. We're running this on r3.4xlarge instances, which have 122 gigabytes of RAM. Currently, for the OSRM planet, we need 70 gigabytes of RAM, which is quite annoying because it just tipped over the 61 gigabytes of RAM of the next smaller instance size, so we had to double our instance size and our cost, and everyone was really sad for a day. You don't have to do this on the planet; basically, proportionally less area means less RAM, and you will still get the same speed if you have enough CPU available. So: routing, really fast. That was basically the status quo a year ago, and a year ago we set out to decide what the next thing was that we wanted to achieve with OSRM. It does fast routing, it works quite well, and we set out to improve the in-car navigation experience. What I wanted was to make it possible for OSRM to work in a car. I keep hammering on the car case here, but again, this could work for bike; I actually mostly ride a bike in Berlin, so I mostly use this as bike navigation, and you can also use it for hiking navigation. All of these things are in there. Car is just the case that is the most interesting one from a business perspective for us. For in-car navigation there are two things. Oh yes, this is how it should look; this is basically the goal. What we want to achieve is to give clients full control over how routing results are shown to the user on the client. OSRM generates a big payload with all the information in a route response that you need to build this experience. The map, by the way, comes from another part of the stack: vector tiles that are rendered on the device, so that's a separate piece. The important thing here is that there are turn icons, there are names, there are distances to the next intersection; all of these things need to be in the routing response. The client can then put this together, and we don't care who the client is: it can be your application, your mobile app, it can be in a car, it can run on your bike. Whatever the device is, we want to be able to support it as long as it can make requests to a server. It needs to be connected; whoever is going to ask about offline support afterwards is going to get a good answer. So yes, there were two things we needed to figure out for in-car navigation.
The first thing is we had to improve guidance, and the second thing is we had to add dynamic speeds, which you already heard a lot about today. So, guidance. One thing we figured out is that thinking about guidance is actually really hard, so we started building debug tools for doing virtual test drives; we do real test drives ourselves as well. Internally at Mapbox we have SDKs that work with OSRM responses that you can run on your phone, and parts of that we try to make available as SDKs so you can use them in your app too. But we'll likely never release an end-user-facing app, just because that's not our business; we're not in the business of end-user apps, we're in the business of building blocks for applications. So we built this tool that runs in the browser. It's actually open source; it's called the directions simulator, so you can run it yourself, and if you have an OSRM instance it will work with it. It basically drives the route that you get back from OSRM. You have to think of a route as an array of steps: every time the driver has to make a decision, like turn left, get on this ramp, get off this ramp, there is an entry in this array that says here is the instruction the user should get. Again, we push the responsibility for how this instruction is interpreted to the client. The client should decide: I want this as a voice instruction, or I need the steering wheel to flash, or I need the person's smartwatch to vibrate. Whatever the decision is about how the instruction is delivered to the user, it's on the client side; we want to give you the tools to make that possible. On the right side here you see the current step we're in, or actually the next step that's coming up; we just switched. You can see there's a type, and it says end of road here, so we're coming to the end-of-road instruction: this is the end of the road, we have to go to the right, so there's a modifier "right". Then it switches to the next step. And there's a bunch more information in there: intersection information, lane information, and so on. At some point in the video, yes, I'll scroll down again so you can see it, there's a type that says you should continue now and not take a turn; then there is another instruction that says you should take a turn, and so on and so forth. When we set out to build this, we realized we didn't have enough expressiveness in the current set of step types. On the left is the status quo; on the right is everything we added in the latest OSRM version. One of the most important things is motorways, the Autobahn, where we basically didn't have good handling for getting on and off a motorway. We only said "continue", and that really doesn't help you, because you don't want to continue on the Autobahn when you should be getting off. And we had to work on rotaries and roundabouts for a long time to make them nice; anyone searching for a lot of frustration, try roundabouts and routing through them. This is an example. Can I do this? Great. The old case: this is getting off a ramp on the Autobahn, and it just says "continue" here, which of course is really bad. In the new version it says "take the ramp on the right". And we actually also implemented destination signs now; a small sketch of how a client might read these steps follows below.
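As a companion to the step-array idea described above, here is a minimal sketch of a client walking over the steps of an OSRM v5 response. It assumes the same local osrm-routed instance as before; the exact set of maneuver types and modifiers you will see depends on the OSRM version and the route, and the coordinates are again illustrative.

```python
import requests  # assumed to be installed

coords = "13.388860,52.517037;13.397634,52.529407"
resp = requests.get(
    f"http://localhost:5000/route/v1/driving/{coords}",
    params={"steps": "true", "overview": "false"},
    timeout=5,
)
resp.raise_for_status()

# Each leg is an array of steps; each step carries one maneuver that the
# client has to render however it likes (voice prompt, flashing wheel,
# smartwatch vibration, ...).
for leg in resp.json()["routes"][0]["legs"]:
    for step in leg["steps"]:
        man = step["maneuver"]
        print(
            f"{man['type']:<14} {man.get('modifier', ''):<12} "
            f"onto {step.get('name') or '(unnamed road)'} "
            f"for {step['distance']:.0f} m"
        )
```

How a given type/modifier pair is turned into an actual instruction is deliberately left to the client, which is exactly the point the talk makes.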
So the client in this demo does not parse the destination signs correctly yet, but it should have said "take the exit towards Cologne on the right". Another example: roundabouts, which I already talked about. We're now counting exits. Back in the day we said "enter the roundabout" and "take the exit onto Kastanienallee", whereas now we say "enter the roundabout and take the second exit onto Kastanienallee"; we can count the exits. The sad part, or the annoying part, about everything in guidance is that you have to tinker all the time. The devil is in the details, and there are all these small things where there is no objectively right answer; everyone has a different opinion. What we try at the moment is to say: we don't actually know how the user should interpret this, or how the client should handle this, so we give the responsibility to the client, and we are building internal libraries that try to parse this information. Some of that is open source already, some of it is not; we're trying to figure out the right way there. It's hard. But we don't want to make these decisions; we want to give them to the clients. Another thing with guidance is that we now combine instructions. On the left you can see two left instructions, "turn left" and then "in 15 meters turn left again", whereas actually this is a U-turn; on the right side it now says "make a U-turn". The last thing is lanes, lane information. In OSM you can tag lane information using the turn:lanes tag, and as a value you say it should be left, through, right, or a U-turn, with a pipe separating the lanes. This information can then be turned into these nice icons on the client, and it can also be used for voice instructions, for example "take the third lane on the left". Okay, that's guidance. Next thing: dynamic speeds. We want to be able to give realistic ETAs: how long will it take to get to your destination? And if there is a better route we want to show you the better route, to give you the better route. So if there is a traffic jam, or if there's always rush hour on one segment, you should be able to route around it. One thing we always try to build is good debug tools, and one thing we built fits this joke in geo that eventually everything becomes a tile server: half a year ago OSRM became a tile server too, and it now serves debug tiles. Here you can see a segment where the Lua profile gives it a speed of 55 kilometers per hour. This is the old state, with non-dynamic speeds: you assign one value to a street segment, but there's no local knowledge behind it. How it works is that either the max speed is already tagged in OSM, or, if it's not tagged, we have to define what it is. For example, the big street outside the conference center is tagged as primary, and in the OSRM profile we say every primary street gets 65 kilometers per hour. We're actually in a city here, so that won't work. Luckily, in OSM this one is tagged with maxspeed 50, which is cool, but we still won't have information like rush hour or traffic congestion in there. So how do we get real speeds in? We can load external CSV files in the format: from node, to node, and then the speed on that segment; so we define segments through node pairs (a small sketch of that file follows after the next part). There are three types of speeds you can work with. Actually, there are four.
I just realized the zeroth one is the posted speed limit, but that really doesn't mean much, because that's not how people drive. The first one is free flow: what happens if there's no congestion on the street and you can just go as fast as you want. Hopefully, if we're all law-abiding citizens, this will be the posted speed limit, but I definitely know some roads where you cannot hit the speed limit because they're so windy in mountainous areas, and there are streets that are just always congested. Think about the bigger metropolitan areas of this world: even at night there are traffic jams, so you can never really drive the speed limit. If you want to go more fine-grained, you can do the free-flow speed, or the speed in a certain time bucket; we call these historical buckets. In a typical week, what is the speed for this 15-minute window, or 5-minute window, or whatever you think your window should be? The last one is real time: what is the speed right now, if there's an incident, a sports event, weather. When we think about speeds we mostly think about real time; that's also the hardest one. The one that seems to have the most value for us at the moment is actually the historical buckets, because rush hours tend to be fairly predictable. So again, this picture of the debug map: on the left you can see the default, the speed defined by the profile; on the right you can see the speed that is now coming from one of our speed sources, and it's actually 24 kilometers per hour and not 55, because there's congestion on the street. Among the debug tools we built, there's one that constantly queries our own internal OSRM instances and looks at the ETA right now. This is going from the Mapbox San Francisco office to the airport: the blue line, which flatlines, is without traffic information; the orange line is with traffic information, and you can see really well that there's a rush hour going on in the morning, then it stops when the rush hour is over, and when I took this picture yesterday evening the evening rush hour for going home hadn't started yet. We have another tool with which we constantly query routes and also save the geometries of the routes, and here you can see how the routes actually change. From our office in San Francisco there are basically two branches for going to the airport, and depending on the time of day there's a different branch you want to take. So how do you get these speeds into OSRM? We had to make processing faster. I said it takes eight or nine hours to process car, and we can only update by re-processing, so that's not fast enough. The first thing we do is cache: we cache major parts of the processing and only apply the speeds at the very last step. The second thing is we do less work: we contract less of the graph, but then there's a trade-off between processing time and query time. We found that 0.8 as a contraction factor works pretty well; maybe 0.7 is also okay. And the last thing is smaller geographical areas. To finish this up, we got turnaround times down to ten minutes by doing only North America, which is good enough for us. That was three hours before; the eight hours were for the planet, it's three hours for North America, and it's ten minutes if you do all the caching in between and do less work.
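The external speed file mentioned earlier is just a CSV of OSM node pairs and a speed, one directed segment per row. The sketch below writes such a file; the node IDs are made up for illustration, and the file is then handed to the OSRM pre-processing tooling through its segment speed file option (check the exact flag name for your OSRM version). This is only an assumption-laden sketch of the workflow, not the Mapbox pipeline itself.

```python
import csv

# Each row: OSM node id of the segment start, node id of the segment end,
# observed speed in km/h for that directed segment. IDs are placeholders.
segment_speeds = [
    (252792655, 252792656, 24),   # congested inner-city street
    (252792656, 252792655, 31),   # opposite direction of the same segment
    (418835119, 418835120, 87),   # free-flowing motorway stretch
]

with open("traffic_updates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(segment_speeds)
```

In practice you would regenerate this file from whichever speed source you trust (free-flow estimates, historical buckets, or real-time feeds) and re-run the cheap, cached part of the pre-processing each time.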
The route request times increased from 20 milliseconds to around 200 milliseconds, which for in-car navigation with two waypoints is totally okay. Okay, so to sum up: in-car navigation ready, and bring your own speeds. I have one problem that I faced a lot this summer: I'm pulling a trailer behind my car, so I have a speed restriction on highways and can't go faster than 80 kilometers per hour even if the limit is 90 or 110. Have you thought about that problem? Yes. What you want is even more dynamic speeds. The way I would currently advise solving that is to set up an OSRM instance with a profile that never goes above that speed, which is possible right now but is a poor answer, because it's way too expensive to do for every case. We currently cannot handle cases well where the user defines constraints at query time; pgRouting is definitely better at that. Have you thought about updating the average speeds in the OSM base data set from your speeds, since you obviously have way more resources to collect and update them? Yes, we have thought about it, and talking with the OSM community about mechanical edits is its own thing. We would like to do that; on the other hand, we're also not confident enough in our data yet. What could happen first is that we open up this data set, before it's actually applied to OSM, so users can apply it themselves, but we have not done that, and this is me talking, not Mapbox. Hello, I'm from Carto, formerly known as CartoDB. One thing we have done, so you can tick this point off, is optimize OSRM a little so it also works on mobile: you don't need 100 gigs, just a couple of megabytes, and it works across Europe at around 200 milliseconds as well. So that's doable, and I won't ask about offline; we have it already. But I wanted to ask about the in-car navigation: the route calculation is just one part of the job. You also need to follow the GPS, see what the actual direction is, whether you're deviating from the route. Are you doing something for these additional pieces as well? Yes. This work currently happens in two Mapbox repositories, one called navigation.js and one for guidance; there are implementations for Android and for iOS, and I'm not sure exactly what the repositories are called, sorry. But these are SDKs that work with the OSRM responses as well. The problem is that we're still finding out ourselves how this all works, what good client implementations look like, which is why for now we draw the line and say clients have to think about this and implement this, because things like when to reroute can only be decided by the client. We need to make sure that OSRM delivers responses that give a lot of flexibility there. Any more questions? Thank you, that's a great talk. I have a question: it seems like there's a trade-off, or compromise, between pre-computing all the contraction hierarchies and, alternatively, waiting longer for the routing algorithm to run, right? Yes, exactly. So do you foresee, do you think there are efficiencies still to be made there?
Do you think you could find ways to improve the construction of the contraction hierarchies, or do you think that's maxed out and you can't get it any faster now? If there were easy ways, we would have already done it. Well, I would never exclude it. There are definitely more speed-ups to be had, just not necessarily at the algorithmic level but at the plain programming and parallelization level, and by looking at less data at certain points; it's definitely possible. It's always a question of the order of magnitude of the speed-up: if we only shave off 10 percent, then we could also not bother. So, one more question: can you describe a little the process of collecting all the real-time data, to arrive at a working average of the real speed people are driving in a 15-minute window? So, Mapbox collects telemetry from its mobile SDKs. Applications that embed Mapbox send anonymized telemetry back to our servers; we're currently collecting 80 million miles per day. That's a lot of trace data coming in, and we process it, match it to OSM, figure out which road segment each probe is on, figure out the modality, so whether it's actually a car, and from that we can derive the speeds. So, thank you. Let's give a round of applause to the presenters; very good presentation. One last thing: we're here until Friday, talk to us. Thank you.
The Open Source Routing Machine (OSRM) is a routing engine, providing blazing fast route-finding on global data sets like OpenStreetMap. With Version 5 of OSRM we tackled two challenges: providing a world-class navigation experience for car drivers and making OSRM easier to work with for developers. To deliver great navigation, we made route duration estimates more realistic, by allowing developers to provide custom speed and turn duration data. We also dramatically shortened pre-processing times and improved turn-by-turn guidance. To deliver a great experience to developers we modernized the code base and improved the build and test systems. We also refactored the HTTP API to support the new features and removed historical shortcomings. In this talk we will introduce the subject of routing in general and then explain the new features of OSRM Version 5 in detail. We will highlight the trade-offs we faced and the reasoning behind our decisions.
10.5446/20431 (DOI)
Thanks very much, and we appreciate everyone's patience as we work through some logistical issues. My name is Frank Pichel, with the Cadasta Foundation, where we're focused on property rights and how you might document the rights of those people left out of formal systems. So with that, I'm going to dive right in. I recognize that most of you in the audience probably take property rights for granted: you know that the land you own is something the government will protect, that you can claim your right and be free to trade your property or engage in market activities. But let's take a step back and imagine that we don't all live in Europe or North America, where these functioning land administration systems exist, and pretend we're residents of an informal urban slum. And this isn't an unreal proposition when you consider that by the year 2020 there will be an estimated 1.4 billion slum dwellers, representing one in seven people on the face of the earth. For most of these urban slum dwellers, property rights are not something they enjoy. So what would you do if you didn't have property rights? How would that affect your day-to-day life? I'm just going to mention a couple of the ways this might affect us all. The first, and the one most often mentioned, is access to credit. If you don't have a formal title, deed, lease, or rental agreement, how can you leverage this asset, which for many people is the most valuable asset they own? The theory is that with a title you can buy, sell, and trade. Now, the reality for an urban slum, or even a low-value rural property, is that banks really aren't interested in collateralizing property worth a couple of thousand dollars; the cost of eviction, the cost of trying to repossess the property, doesn't really make it worth it. That said, it does give lending institutions the confidence that you do in fact own the property, that they know where to find you in the event you default on the loan, and maybe they'll just hold on to that title or deed so you can't buy, sell, or trade the property while you have a mortgage against it. Another thought is your agricultural and business decisions, less relevant for the urban context, of course, but in a rural context: are you going to invest in irrigation? Are you going to invest in long-term, higher-value crops like hardwoods, or are you going to focus on short-term gain? And those small businesses: are they going to invest in improving the property? Probably not. Let's think about home improvement decisions. If you're not confident the property will be yours next week, are you going to spend money on a new roof, improving a chimney, or putting down a concrete floor? Again, probably not. And it's important to realize that all three of those things directly contribute to health and well-being. Now, the threat of eviction. Without confidence in your right to occupy the property, you're always in fear that the government might come along and evict you, or that someone else might come forward with a claim to the property. And that, again, affects your day-to-day life. Are you going to leave the property unoccupied? Probably not. And that means that if you are, say, a two-breadwinner family, someone has to remain on the property, so one of the kids might not be getting an education because they need to be there to protect the property and potentially move things when the government comes through with evictions. Land conflicts.
In emerging economies it's not at all unusual to find that 50 to 70 percent of court cases are around land conflict. With no registry to refer to, no real property rights records, who usually wins those conflicts? Those with the money to really fight them out in the courts, and those with connections. And that's to say nothing of the macroeconomic effects: you'd be hard-pressed to think of a major global conflict that didn't have a land rights component to it. Then I'll get to infrastructure upgrades. If you're a slum dweller without confidence that the property is yours, you're in a pretty weak position to advocate for improving the sanitation, the electricity, the water, the access roads. And if you do advocate for them and they are put in, guess what: there are probably more evictions, and government compensation in emerging economies is rare at best. And finally, your identity. If you don't have a formally registered property in your name, you probably don't have a mailing address, and you probably don't have a real sense of ownership, of belonging. So property rights is really a cross-cutting issue, and when you think about it in that context, it affects development writ large. Land really is a multi-dimensional resource. It's not just where you live; it's also a means of production, the basis of livelihoods, how you're making your money. It can mean different things to different people. But again, in the development context, how can an agricultural project, an infrastructure upgrading project, or a finance project really go forward without looking at the property rights issue? With all of that, you might ask yourself: why haven't there been more land programs to fix this? The World Bank is active, and the bilateral and multilateral donors are spending billions each year. Where is it going? And yes, there are some property rights projects; in fact, well over a billion dollars has been spent in the past five or six years, if I remember correctly. But that said, the track record for success is relatively poor. Ask yourself why. From our perspective at Cadasta, a big part of it is that they follow the traditional top-down approach. There's a lack of land specialists across the globe. If you look at a country like Cote d'Ivoire, I believe it has 25 to 30 registered surveyors, and it takes a registered surveyor to do the cadastral plan that's a requirement for a formal title. Compare that to Austria, with 300 licensed surveyors, half the population, one third of the area, and an infrastructure that makes it easy to get around the country. In Cote d'Ivoire you'll find the surveyors clustered in three cities, and it's a two-day drive to get to parts of the country. That's compounded by the fact that the solutions for documenting land information are often based on an urban, colonial elite. It might be a legacy left over from a colonial period, where the requirements for accuracy make sense for that high-value urban area, but do they really make sense for a rural area where the property value is much lower? And when you think about the cost to register property, there are the formal and informal costs, but getting that surveyor out to your rural property probably doesn't cost much more or less than it does here in Europe. So you're talking about a cost that's out of reach for many smallholder farmers.
And finally, when you think about big land registry and cadastral systems that cost millions of dollars to put in, in all honesty the track record of success for those is pretty poor. They go in, they work for a few years, but then they start running into issues: ah, the server's down, we need to buy a new server, but the government hasn't budgeted for new hardware. You might find that the staff you've trained are now all well-trained IT specialists, so they go to the private sector; who's going to fund training of new staff? Who's going to pay for the maintenance of that software, whether it's a proprietary system or paying specialists for your open source software? The end result is a situation where a vast majority of the world's population lacks formal property rights. You see numbers ranging from 70 to 90 percent of the globe being undocumented; let's just say 70 percent. It's clear the traditional approaches aren't working. So, recognizing that government and the existing land professionals can't keep up with this demand, how might we address it? There are a couple of approaches that we think about at Cadasta. One is to use appropriate technology and innovative approaches. Let's not rely on the traditional survey methodology; let's look at using lower-accuracy but lower-cost GPS; let's think about using satellite imagery; let's think about using the smartphones sitting in a lot of people's pockets. This might not get you the same level of accuracy that a traditional surveyor would, but it's pretty good, and what's the saying, perfect is the enemy of good? When 70 to 80 percent of the Earth's surface is undocumented, let's just get to good first; we can perfect later. And finally, let's think about flexible software: software that's able to support unconventional approaches, that's accessible, maybe cloud-based. Let's think about an open source tool, and let's think about open data. So at Cadasta, our focus is really on how we can document the rights of those people left out of the formal system and post that data on a secure platform that is accessible and can be managed by those end users. This really comes back to my experience working with the formal sector, putting in those top-down government systems and seeing that they aren't always proving to be sustainable. So let's flip that model on its head and work from the bottom up. With the Cadasta platform, we looked at a lot of the standards in land administration. There's the very formal Land Administration Domain Model, and there's a subset, the Social Tenure Domain Model, that's much more relevant for us. But then we also started talking to partners and realized that a lot of them were collecting property rights data along with other, much larger data sets. They might be interested in household surveys and slum upgrading, with property rights as one component, but they needed the flexibility to add in the other data sets they were collecting. So that could be various types of rights, not just ownership: leaseholds, customary rights, grazing rights, hunting rights. And how is that data being documented? It could be interviews, videos, pictures; all of this can go up to the platform, adding to the body of evidence regarding your property rights. And again, we spent a lot of time speaking to partners when we were designing the platform.
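To make the flexible-record idea concrete, here is one way a single documented claim could be represented as a GeoJSON feature, loosely inspired by the social tenure domain model mentioned above. This is only an illustrative sketch: the field names, tenure vocabulary, and file names are assumptions, not the actual Cadasta platform schema.

```python
import json

# One documented claim: who (party), what relationship (tenure type),
# where (spatial unit), plus the evidence attached to it.
# All property names below are illustrative, not a published schema.
claim = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[
            [36.8219, -1.2921],
            [36.8224, -1.2921],
            [36.8224, -1.2917],
            [36.8219, -1.2917],
            [36.8219, -1.2921],
        ]],
    },
    "properties": {
        "party": {"name": "A. Wanjiru", "type": "individual"},
        "tenure_type": "customary",      # could be leasehold, grazing right, ...
        "acquired_how": "inheritance",
        "evidence": [
            {"kind": "photo", "file": "boundary_marker.jpg"},
            {"kind": "interview", "file": "elder_statement.mp3"},
        ],
    },
}

with open("claim.geojson", "w") as f:
    json.dump(claim, f, indent=2)
```

The point of a structure like this is that the geometry can come from any source (GPS, imagery digitizing, a sketched field paper) while the properties stay flexible enough to carry whatever evidence and extra survey data a partner collects.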
So one of the big things was flexibility in how their data collection is done. But also they kept saying, we need satellite imagery, or, you know, we're using Google Earth and posting our data up, is that okay? To which, well, it depends on your degree of sensitivity about posting that data to Google. And if you want satellite imagery, they'd say it's so complex, they're asking us questions about orthorectification and whether it's orthorectified, and we don't know what to do. And they told us we could access it, but we've got to go to an FTP site to download it and we can't do that from Burkina. And by the way, we don't have a credit card. How are we going to get this imagery? So imagery really revealed itself to be core for a lot of our partners. So with our platform, we stream imagery using the DigitalGlobe API, so partners in the field can use that imagery directly. And for some, that's their first step in documenting property rights. With half meter or one meter, two meter imagery, you're at least able to get general boundaries that again can be improved upon. It's an incremental process. With the Cadasta platform, it's also important for people to manage their own data. We don't want to be the ones involved with every implementation. Our users can set their own access rights, who can do what with the system, what data is accessible and viewable by the world at large, what is not, and really manage their own security settings. For us, that was a bit of a challenge because we're very much for open data and land information, but we recognize that for many people, putting this data in the public domain can actually put them at risk. So it's up to users to set the level of openness. And where does Cadasta fit in? Really, it's about two things. One, providing the technology to document their rights, but two, providing open data sets where they do exist on land information. And while in many places that might be your title, your deed, your cadastral layer, it could also be things like the concession layer. We did an analysis of concessions in, I want to say it was the Philippines, last year and found that for only about 50% could you really define where the concession actually was. And within those 50%, there were overlaps, there were discrepancies, there were gaps, some concessions were completely in the water, which didn't make sense for the type of concession it was. So if specialists can't decode these concession agreements, what's a rural inhabitant supposed to do when they find out that bulldozers are moving onto their property and saying they've got a concession for mining or for palm oil or whatever it might be? So displaying that open data is quite important for us. So we seem to have lost some words there, but in any case, here we are at the Cadasta platform. And our focus is really not on the tools to bring in the data, whether that's mobile applications, whether it's using drones, whether it's traditional systems, whether it's paper documents. Our focus is really on working with the partners and their beneficiaries and getting the data to the platform. And part of our thinking behind this was recognizing that there are good tools out there for data collection. We don't want to spend time and money on new technology when there are good solutions out there. I'll skip through this given the time, but one of the first I'll mention is Open Data Kit and Geo Open Data Kit. It's a great data collection application with, I think, 100,000 downloads when I checked a couple of months ago.
And then GeoODK, which adds a bit more of a spatial component. It already has an open source community around it. It doesn't require us to build one, but it fits our need for that data collection. And you can sync your data collected in ODK or GeoODK and post it directly into the platform. Another one we work with is Field Papers. I don't know if anyone's worked with Field Papers here. It's kind of common within the OpenStreetMap community, but it's a paper-based approach to documenting rights. You print out a map, you go out in the field, you take notes on top of it, you take a picture. And what's interesting is that the QR barcode in the corner georeferences that paper map. So you bring it right back into the platform and you can sketch on top of it. And now you've got some boundaries, really embracing that fit-for-purpose model of using technology that's paper-based, cheap, accessible. And finally, you could use a handheld GPS combined with a paper questionnaire, or digitize boundaries using the satellite imagery if you prefer. So over the past couple of months we've had a couple of interesting programs documenting rights in Kosovo using GeoODK and drone imagery. It was really nice looking at two centimeter imagery sitting with the property inhabitants and really being able to pick out corner points, and it sped up the traditional process of using your survey technology. Working with Namati, we've collaborated with a couple of groups in Kenya. It's given us some different use cases in terms of customary rights, pastoralist groups, and a range of data collection, whether sketching on imagery, using Field Papers or using GeoODK. Over the coming months we'll be building out our partner pipeline and improving the platform. The next big one is a QGIS plugin, and finally adding custom base maps, data imports, and improving our documentation, so we can really more directly interface with those other applications out there for data collection. Thank you. Thank you very much. Are there any questions? Questions? Who's the first one? Nobody? Okay. What is the QGIS plugin? What will it do, the one you are talking about? So one of the main things is our outputs. So producing community maps and then being able to manipulate them, given the more robust tool set within QGIS. So being able to take the data that's in the platform directly into your QGIS working environment. Thank you. Documenting the land rights is just the first step. How do you do the secure management of the land rights? Have you at all looked at the blockchain technology in that context? Sure. We spent quite a bit of time looking at blockchain actually. And while I come down on the side of thinking it's probably appropriate in a more secure or more advanced land information system, where that incremental greater degree of security really matters, in places where documenting rights in the first instance is the biggest priority, it's of less importance to us. That said, I could see where hashing basic aspects of the data, like when it's recorded, might make sense. So at this point, we aren't integrating with blockchain, but it's something we're certainly considering for the future. No further questions? Then thank you very much for the presentation. We are just in time. Perfect.
Over 70 percent of land in Sub-Saharan Africa is not documented or included in the formal land administration systems; the current requirements of land information systems have created a hindrance rather than facilitated security of land tenure. Cadasta Foundation is aiming to build “fit-for-purpose” land tools that focus on making it possible for communities, governments and non-governmental institutions to document land tenure rights, without the rigid requirements imposed by current land information systems, land tenure documentation procedures and physical boundary accuracy. The “fit-for-purpose” terminology was coined at a World Bank conference, where stakeholders realized the need to come up with a different approach when developing land administration systems. The Cadasta platform is an open source project built on top of Django. This fits well with the requirements of a land information system that is flexible, affordable and attainable for recording land rights. The Cadasta platform's extensive API and functionality allow it to be connected to GeoODK, which is essentially ODK with added functionality for mapping and spatial features. This makes GeoODK an ideal tool for participatory data collection, something that has been advocated for in the “fit-for-purpose” approach to recording land rights information.
10.5446/20430 (DOI)
Hello. Good morning on the Thursday. Yeah, you probably had another session before this one. This is my first. When you look at the program, you kind of puzzle over how the organizers put the different presentations together. Well, this one is clearly around the OGC standards. So this morning we have three presentations about OGC related standards. The first one is Athina, with a long history in OGC. And I think there's no need to introduce you. Good luck. Thank you. Welcome everyone to the session. Yeah, my name is Athina Trakas. I work for OGC and I'm OGC's representative in Europe. So, whenever you have questions. And I brought a co-speaker. That's Dirk Stenger from lat/lon. And so what we are doing: I do a first, more theoretical introduction and then we go practical with Dirk. And I have to say, compliance, that's my colleague, Luis Bermudez. He is our compliance person. And actually, he provided most of the input for my slides. So, just one slide about the OGC. What do we do? We, that is our members, develop open standards and associated best practices. And the OGC as an entire organization, we want to serve as a global forum for the communication and collaboration of users, software developers and scientists in the geospatial domain. What we are also doing is provide an agile and innovative development process. Actually, we are bringing people together so they can test their software against real world scenarios, using the standards and the data. And then this all goes back into the various working groups and the information is shared. And so, we try to improve. We are over 525 member organizations worldwide. And we have an internal portal for the members, and that's nearly 7,000 users. And our members have developed more than 60 standards. Many of them are also ISO standards, because one aspect of what we are doing is collaborating with a lot of other standards organizations and general organizations in the domain, also with OSGeo. Today, I want to talk about compliance testing. And the compliance program actually deals with various activities. The key topic is, of course, that it oversees the compliance test activities with the standards working groups and software developers to improve the testing tools and the tests of the standards. And in the compliance program, in person, Luis Bermudez is managing the process for certification of software products. And what does this mean? I will try to explain in the next few slides. Why do we need reference implementations? So, we have standards, there is software, but people might want to look, they need more information about, you know, does the standard really do the job? And so, there are three ideas, or there's a process. First of all, we need to validate implementations. That is, we test and prove that the software does what it has to do according to the definitions in the standard. Then we need a validator for this. That's a machine that tests the software, that does the job of the testing. And then we need implementations to verify that the test makes sense. And this is an iterative process, but at one point we stop with that. So, at FOSS4G CEE, Central and Eastern Europe, 2014, there was a benchmark session in Bremen on the WPS.
And the results: you can see various projects that implement the WPS standard assigned individuals from their community to participate in this benchmark. And you see Gerard Verneuil mentioned, he is here, and 52°North, they have a booth there. So if you want to learn more details about that, you can visit or approach these people. But stuff like this is really important and really valuable for OGC because we get feedback on the standards, but it's also very valuable for OSGeo, for the open source software community. So, we need to have a validator and tools that run the test. Before we can go to the certification program, or to the certification of the software, we have to define a test and check if the test is right. So there are various testing tools; there are tools, communities and support. We use TEAM Engine and, as you can see, TEAM Engine is on GitHub, so it's open source software. There are code contributors, and the second one is Luis Bermudez, he has contributed the most code, but there are more people that contribute to the code of TEAM Engine. Then we have a public forum for support and, of course, communication facilities. The idea is that if you want to see what the test does, you can download the software, the code from GitHub, but you can also go directly to the OGC website, so you don't have to store it locally on your computer. Once the test has been approved and is fixed, and you want to go for OGC certification, you have to do this on the OGC website. That's a service, but it's transparent, it's open source, so you can also download the source code. Anybody can run the test and see if it does the work correctly. This means not only members of the OGC can do that, but also non-members: you register and you don't have to install it on your computer, you can just use the OGC website. There's a selection of tests, we have various tests, not only one, I don't know how many exactly, so you select which standard you want to test and then the test suite, you enter some additional information and then you can start a new test session. TEAM Engine will provide various windows where you can select options, so it's very step by step, it's not rocket science, it's nothing complicated, the test guides you step by step through the entire process. And then there comes a result, and here you can see that they have tested the WMS 1.1.1 test suite and there are two tests that failed, and you can click here on view details to see why this failed. Then you can identify: is the test not mature enough, do we have to work on the test, are we still in the phase of testing the test for the standard, or you can see if the software is not correct. And then of course you would provide feedback to Luis or the group so they can improve either the test, or you can look if the problem is in your software. We have various types of tests: encodings like SensorML, KML, GML, metadata in XML and more; then servers, there are many tests out there. And by the way, you can read all of this in more detail on the OGC website, there is a compliance testing website, and we have one client test, for WMS 1.3.
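For readers who have never run one of these suites, most of what a test like the WMS 1.1.1 suite automates boils down to issuing well-formed requests against the capabilities URL you provide and checking the responses against the specification. The sketch below is not TEAM Engine code, just a rough illustration of that kind of check; the service URL is a placeholder, and a real test suite performs many more assertions than this.

```python
from urllib.parse import urlencode
from urllib.request import urlopen
from xml.etree import ElementTree

# Placeholder endpoint for the service under test.
SERVICE_URL = "http://example.com/geoserver/wms"

def fetch_capabilities(base_url):
    """Issue a standard WMS 1.1.1 GetCapabilities request."""
    query = urlencode({
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetCapabilities",
    })
    with urlopen(f"{base_url}?{query}") as response:
        return ElementTree.parse(response).getroot()

def basic_checks(root):
    """A couple of rudimentary checks in the spirit of the compliance suite."""
    # WMS 1.1.1 capabilities documents use the WMT_MS_Capabilities root element.
    assert root.tag == "WMT_MS_Capabilities", f"unexpected root element {root.tag}"
    assert root.get("version") == "1.1.1", "server answered with a different version"

if __name__ == "__main__":
    basic_checks(fetch_capabilities(SERVICE_URL))
    print("basic capabilities checks passed")
```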
So now we have identified, well, we have tested and validated the test and have said, okay, now the test is fixed, secure, it's mature enough; now we need implementations to verify that the test makes sense. We have a process at OGC, so if you say okay, the test is mature enough, we go through a formal process in the OGC, and well, I won't go through all the details here, but the members approve that the test is okay, and once it's approved by the members we can go for the next step and start with the reference implementations. Here is another example, the catalogue services standard. We have two reference implementations: that's pycsw, and I saw Angelos Tzotsos is here, maybe for later in the question and answer session, because he has implemented the reference implementation with a group for the catalogue service, and Geoportal Server is another reference implementation. And the reference implementation for an OGC standard has to be freely and publicly available for testing via a web service or download, so this is a precondition, this really has to be like that. Okay, so how can you use the test once it has been approved in the process for a particular standard? Well, if you're a software developer you can get OGC certified. We certify the implementations, then you get a badge with your name on it and which tests you have taken, and you can use this for your website. Basically we have 851 implementations and 203 compliant products. This means that there are 851 implementations that say we have implemented a particular standard but have not taken the compliance test, and 203 products have taken the compliance test, so they can use the logo and say, okay, we are OGC compliant. There might be some analogies with the OSGeo incubation process, but the OGC process is purely technical. Yes? So why is this difference? Okay, for some it's okay, they say we have implemented it, we don't want to do it to the very last detail. It also has to be said that if you take the certification and you want to use the logo, you have to pay a license fee for that. So your software might be 100% implementing the standard, but if you say, okay, I don't want to pay the fee for that, then you don't go for this certification. So if you want to show that your implementation really works, you can take the test and then you can use that for marketing material, or for tenders you can say, okay, see, I have run the test on the OGC website. This might help you to get more business, to be a step ahead of some of your competitors. And for users it might also be interesting: if they are looking for software and don't know exactly which software to take, they can go on the OGC website and find the entire list of implementing products and certified products. So if you need a particular standard for your architecture and want to use software, please have a look at our website, because there you can find a lot of information, a lot of products that are implementing or even have been certified as OGC compliant. For implementers it's also interesting: you know, if you have the logo you can say I'm OGC compliant and there is no doubt about whether your software implements the standard, so it's very straightforward. You can promote it in your products, on your website, so that's the marketing part of the work.
And here, this is kind of a summary of how it works again. It's more or less, you know, you take the test, the approved test, now that it is secure and mature, you go through the entire process, and then, if there's no fail coming back, you can say, okay, I want to go to the next step. I enter some more data, OGC staff, that's me or in this case we, check all the details, and then we give you the OGC compliance logo and you can use that. So, very, very important: conformant equals compliant equals passing the test plus the license fee. You cannot say that you're conformant or compliant if you don't have the OGC certification logo, and please don't do it, that makes the job easier, because if you say you're compliant but you have not passed the test, I have to call you if you're based in Europe, and then we have to go through the entire process again. So yeah, I would be very appreciative if you don't do that. If not, you can still say: we implement OGC standards. Good, that's a list of available tests you can go through, and those are the plans for expected tests that come out in 2016. And then for those who have to publish tenders, we have developed a guide, a compliance guide for software acquisition white paper, so if you have to write a tender and you're looking for the correct language, then you can find that on our website. So that was the theoretical background, and I would like to hand over to Dirk from lat/lon, because he is working very closely with Luis Bermudez. Hello, I'm Dirk Stenger from lat/lon, and currently I'm working together with Luis; as part of the Testbed 12 initiative we are implementing a WFS 2.0 reference implementation. But first I want to give you a short introduction. As I already said, I'm working at lat/lon. My main areas of work of course focus on OGC standards, and we also develop the deegree software, which provides OGC standards like WMS, WFS, CSW on an open source basis. I'm also a member of the CITE team, which is led by Luis, and my part is that I'm the technical lead of the WMS, WFS and WCS test suites. ETS stands for executable test suite, so if someone is reporting a bug in one of those test suites, I take a look at it and try to solve it or give you advice, and of course I'm also maintaining those test suites. This is just a slide of what the OGC Testbed 12 initiative is. In general, sponsors document their needs and create projects out of that; then participants can apply for those projects, and we are one of those participants. Now we are implementing the use case which was defined by the sponsor, and of course the sponsor also pays money for us to do the implementation. For example, participants implement existing OGC standards or develop new prototypes. OGC Testbed 12 ends at the end of the year. Our project in Testbed 12 is that we develop the WFS 2.0 reference implementation. The main objective is that our implementation should comply with the following conformance classes, and you can see a list there. As you can see, there are really advanced features there. For example, feature versioning or manage stored queries is hardly implemented anywhere yet. When the project is finished, we will pass all those conformance classes. There is also a corresponding project which advances the WFS 2.0 test suite. This is also part of the Testbed 12 initiative.
This project concentrates on the same conformance classes, but instead of the reference implementation it is the corresponding test suite that is created. The process of implementing the reference implementation involves a lot of communication and coordination. The other participant is advancing the test suite, and then we can enhance our implementation with these improvements. Of course this leads to a lot of communication between the two of us. We develop our own integration tests with SoapUI, and all those tests concentrate on the WFS 2.0 specification. Also, as I already said, the main part is the discussion with other members, and we have questions like: how clear is the specification, or does the specification allow multiple ways of implementing a new feature? Sometimes the specification is not clear enough for everyone to know how the test and the reference implementation should look. Of course we also detect bugs in the test suite, and we report them, and this also leads to better quality of the test suite. So, are there any questions from the audience? I don't know. There are some special conditions for compliance if you are open source and the reference implementation, because the fees are between something like 60 and 11,000 euros, I don't know what it was. I think you did the pycsw implementation. Can you say something on this? So for pycsw we did the reference implementation following the Catalogue 3.0 version of the standard. In our case we didn't have to pay any fee for that, because we were also involved in the process of testing the test suite. So it was like we were implementing the standard, but the test suite was also in the process of being implemented. We were following the beta versions, and every time a beta version of the test was released we were also testing our beta version, and we were able to find problems, issues, bugs in both the CITE test and our software. And this is also why it's important that we have two reference implementations: if there is a conflict between those two reference implementations, then the test will not be specific to one reference implementation. There are some aspects where one has to take a technical decision, and having two reference implementations is very, very important so we have more consensus. But yeah, it didn't require any fees on our part because it was an open source project. It was actually released the day that OGC released the final version of the standard. We released the same day because, as the slide that Athina showed before, there is this process that takes long to release the standard, so we were waiting for the standard to be released to actually release our software. So yeah, it was very, very important for us to be able to work with the CITE team, to be able to comply with the test early. And thank you. We have room for one more question. Is there somebody? Not really. So, as a software developer I really appreciate the work here, because as a software developer I don't like to read thick documents, but what I do know is test-driven development, and this allows me to do test-driven development. I can just do a bit of development and test against this and know that it works. So many thanks for your work. Also, Clemens, the next speaker, will also talk about testing specifications. So please bring back your slides. Thank you very much. It's a huge topic and we can only put a spotlight on it. And yeah, thanks for the opportunity to speak here.
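Picking up on that test-driven development remark: a WFS 2.0 server declares which conformance classes it implements as ows:Constraint entries in its capabilities document, and that declaration is what the CITE suite and the reference implementation are ultimately checked against. The sketch below is not part of the official test suite, just an informal way to peek at those declarations; the endpoint is a placeholder and the constraint names are quoted from the WFS 2.0 specification as I remember them, so treat them as assumptions to verify.

```python
from urllib.request import urlopen
from xml.etree import ElementTree

OWS = "{http://www.opengis.net/ows/1.1}"

# Placeholder endpoint for the WFS under test.
CAPS_URL = ("http://example.com/geoserver/wfs"
            "?service=WFS&version=2.0.0&request=GetCapabilities")

def declared_conformance(caps_url):
    """Return the WFS 2.0 conformance constraints a server declares."""
    root = ElementTree.parse(urlopen(caps_url)).getroot()
    declared = {}
    # Constraints appear as ows:Constraint elements carrying an ows:DefaultValue
    # of TRUE or FALSE inside the OperationsMetadata section.
    for constraint in root.iter(f"{OWS}Constraint"):
        name = constraint.get("name")
        value = constraint.findtext(f"{OWS}DefaultValue")
        if name and value:
            declared[name] = value.strip().upper() == "TRUE"
    return declared

if __name__ == "__main__":
    declared = declared_conformance(CAPS_URL)
    # Advanced classes mentioned in the talk (names assumed from the spec).
    for name in ("ImplementsFeatureVersioning", "ManageStoredQueries"):
        print(name, "->", declared.get(name, "not declared"))
```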
This presentation focuses on the role of reference implementations for the OGC standards process and gives an insight into the related OGC Compliance Program. OGC Standards are developed in the OGC Standards Program and follow rules and guidelines that have been set by the OGC's members. Before an OGC standard is adopted by the OGC membership, a public review period is required where non-members can also contribute. Increasingly, some standards working groups also decide to work more openly in the public (like the GeoPackage work group did) to be more inclusive. OGC interface standards also come with reference implementations. The OGC rules state that these have to be "free and publicly available for testing". Some well known OSGeo projects like GeoServer, MapServer, deegree and others are reference implementations of OGC standards. A Reference Implementation is a -> fully functional, licensed copy of a tested, branded software that has passed the test for an associated conformance class in a version of an Implementation Standard and that -> is free and publicly available for testing via a web service or download.
10.5446/20427 (DOI)
Okay. Apparently, I'm wasting time. That's good because this is a developer kind of talk. So welcome. This talk is Getting It Done at LocationTech. How many people have heard of LocationTech? Hooray. That's about half of you. For the other people, you'll learn a little bit about what LocationTech is. But mostly, you'll learn a little bit more about our programming culture because, you know, programming is, you know, a fun activity. So welcome to this talk. I'd like to introduce my colleague, Tyler Battle. He's a software developer at Boundless North in Victoria. And Tyler's been joining me for a year, year and a half. Almost two. Almost two. So November. Wow. I'm shocked. Okay. Tyler's been joining me for almost two years, working on the GeoGig project, and it still hasn't made a release. No, it has not. Can you work harder? Yes. Okay. We're almost there. Excellent. Excellent. Super close. And my name's Jody Garnett. After picking on Tyler, I'll let him introduce me. This is Jody Garnett, who is a community lead at Boundless. And he's working on lots of projects, including uDig and JTS. And other stuff you're doing. Oh, you're on the PSC. And I made you do this presentation. Yeah. And you made me do this presentation. Yes. Excellent. This is how you can tell I'm a community lead. I volunteer people. Excellent. So we work at Boundless. Boundless is one of the sponsors of this conference. And Boundless works with LocationTech. It's a member of the LocationTech working group. And yeah, they pay us to do fun stuff, like talk to you today. So while I'm really happy with Tyler, because who wouldn't be? We've got a couple of people who couldn't be here today. And sadly, these were the people I was really interested in hearing from because I hear from myself way too often. And Tyler as well, about once a week. So Rob Emanuele is from the GeoTrellis project. And James Hughes from CCRI is from the GeoMesa project. And these people both kind of come from some of the cloud community projects that are taking shape at LocationTech. And they've both been making a lot of progress this year. And I was really hoping to get a status update from them and hear how it's going. One thing that's really fun about this is you see that they've got a couple of projects in common, like SFCurve. Now, even though these people come from different organizations, they're working at LocationTech, and they've been able to set up some joint projects together in order to share a library about space filling curves, because who would want to write a space filling curve twice? Yeah, so I'm really sorry they are not here. So LocationTech is a working group developing advanced location aware technologies. And it's part of the Eclipse Foundation. The Eclipse Foundation is most famous for an IDE for developers, but it is also a software foundation for people working on commercial friendly open source software, like us at Boundless. Now, Eclipse is a not-for-profit member-supported corporation. So here's what Eclipse looks like from our perspective as developers. It's kind of up towards the top of the slide and over top of this little hat thing. I think it's a mustache. It's a mustache. Okay. And Eclipse has a number of working groups. So on things like science. Internet of Things. PolarSys. What's PolarSys? That's the embedded one. Embedded? Yeah. It doesn't say embedded. It does not say embedded. I have no idea why it's called PolarSys. And Eclipse Automotive, I can understand that. Yeah.
It's like you take your Eclipse and you drive it somewhere. Yeah. Excellent. And LocationTech, which doesn't say, well, it says location and technology, so that's good. But that's a pretty apt name. That's a good description. Yeah. Here are our LocationTech members: IBM and Oracle and Boundless. I've heard of them somewhere before. Red Hat and Google. And we've got a whole bunch of participating organizations: CartoDB, CCRI, where Jim is from, Vivid Solutions, Ordnance Survey. We didn't update the CartoDB logo. They changed their logo. Why would they do that? Because it's CARTO now. Okay. Okay. Sorry, CARTO. And then we've got a number of guest memberships. OSGeo is a guest, and several universities. From our perspective, LocationTech is working for developers. This year, we have actually had our first project graduate and make a release. So from a developer's perspective, as long as we're releasing, everything's okay. Made it through earlier this year, about January. It took a little while to figure out the technicalities of how to make a release. So since this was our first official release, we had to figure out how to do things like how to sign the artifacts, so they add the metadata that this was from LocationTech. And also, once we had everything built up, how to actually start the release review, so it could be reviewed and checked and be made an official release. But yeah, they sorted that out. The GeoMesa project has also succeeded in making a release. And they made a lot of progress at one of these incubation sprints, which we set up. And they worked, Jim in particular has worked really hard to get this release out. This release is based on GeoTools 14.1. And one trick that Jim did was he combined forces with uDig and GeoGig and others in order to go through and review the GeoTools code base. And the GeoMesa project's been super active this year and is balancing official releases with these patch releases. Yeah, so the patch releases are a bit easier to put out. You don't need to go through as much formality for those. So they've been doing a good job of keeping pretty regular, every few weeks getting patch releases out. So that's been really good. Okay. Another project that's been making a lot of progress is JTS, or the JTS Topology Suite. Who here, like, uses JTS? Who here is using JTS? Who here has used, like, GEOS? Who's used, like, QGIS or PostGIS? Or any open source spatial software ever? Excellent. It all comes down to this one project that implements geometry, which is kind of like the rocket science of our spatial open source industry. So JTS has been ported near and far into different languages. And it's the geometry, it's the shape of our industry. One of the things is when we did an incubation sprint in Victoria, all the other project leads, because we're so indebted to this project, kind of dropped what we were doing and we stepped in and helped this project. So we helped migrate it to GitHub. I think it was on SourceForge. Yeah, it was. SourceForge is still a thing. It is apparently still a thing. Okay. And we also helped review the headers, because the project's re-licensing from LGPL, which is terribly unfashionable, to BSD and EPL. We also helped migrate it from Ant to Maven because, like, Maven is so 2003. And for our pains, we were rewarded with committer status. Now that said, the JTS project is making lots of progress, but it's currently stuck on re-licensing.
So we need to contact a few prior committers and say, yo, are you okay if we change from LGPL to these other things? So when you're starting a new project, it might be worth considering having agreements that allow the project to re-license so you don't have to go through all of this. Just thinking going forward. You know, if you want your project to last over 10 years. Yeah. Yeah. Do you want to talk about it? Plan for success. Plan for success. So GeoGig is a project that does versioning for geospatial data. It's been going for about two years now and it's made a lot of progress recently, but we've been having some issues getting to a 1.0 release. We were using BerkeleyDB as our main data store, which is a problem because it has a somewhat dodgy license that isn't quite open source. So we've been in a holding pattern waiting to get some alternatives using either SQLite or Postgres. And those should be ready quite soon. We have an RC3, which just came out a week or two ago. And that includes a bunch of new features, including some work on the PostgreSQL backend. And yeah, we should actually be getting pretty close, but one of the tough things is when you are going through this process, you do have to go through all of your dependencies and make sure that there are no licensing issues. So sometimes it takes a while. And sometimes I would say there's a correlation between code quality and the level of licensing adherence. So sometimes it's good to have to go through and tidy things every now and again. In particular, with the GeoGig process taking so long, the version of GeoTools changes every year, which means that Gabriel, well, or Tyler, needs to submit a new copy of GeoTools for review. uDig, so uDig is a project which I'm the lead on. And we had a pretty good year. We had a great time at EclipseCon Europe. They had a LocationTech day. And we've been looking at a few things, such as making a cut down version with fewer dependencies. And we've also been combining forces with GeoMesa to share some of this effort. One thing that's kind of needed here is we need a replacement for the Java Advanced Imaging image processing library, because it's not open source. It kind of came out of the box with Java. So we're gonna need to write ourselves a replacement. Or we might be able to treat it as a works-with dependency. So if you've installed this image processing library in your Java, uDig might be able to pick it up. This works-with approach is really helpful for things that are not open source, like database drivers, for example. Let's move on to events at LocationTech. We had a great time at the Philadelphia code sprint. This was an OSGeo code sprint with a really strong mix of OSGeo and LocationTech projects and the Cesium team. It's an OSGeo event sponsored by LocationTech and organized by one of our member companies, Azavea. And I went there to work on uDig and spent most of my time doing GeoTools outreach. So here's a bunch of cheerful people. There's Jim. There's Jim cheering, he looks very happy. Yeah, there's Rob in the corner working. There's the MapServer team in the other corner not working. Oh wait, one of them's working. Okay. We also had a really interesting, for us, conference at FOSS4G NA in 2015. So this was a joint EclipseCon FOSS4G event. And it was really fun to kind of meet real Eclipse developers because we feel a bit like imposters. Because we're here to work on stuff. I will fix this.
I'm fixing the slide right now. There we go. Okay. One thing that was interesting is there were a lot of them, and not all of them were like compiler language geeks. And we really had a great time hanging out with the science working group. And they kind of felt like our people. We also suspected that they had more fun at our talks than we had at their talks. Yeah, a bunch of the Internet of Things people were really excited about the mapping stuff too. Yeah. Yeah. Yeah, so it was a really great event. And here's an example of them having fun at a Planet Labs talk. We also have like a LocationTech day. And this was part of EclipseCon Europe. And I was really surprised and shocked at how much larger the Eclipse community is in Europe than in North America. While our talks went well, the highlight for me was kind of crashing the science working group meeting and seeing some of the fun stuff they're doing and catching up with Frank Gasdorf, who's one of the uDig committers. The science working group was kind of fun. We've got all our cloud geospatial stuff going on, and Rob's talking about thousand node clusters, and the science working group is looking at him and says, one of my nodes has a thousand, you know, cores in it. So it was just a little bit of a different experience. So here's us kind of having fun, having beer, beer is a form of fun. And another kind of central event for us was an incubation sprint. So we kind of hijacked people being in town for the GeoServer Wicket sprint this last January. And this was a longstanding request from our developer community. And it was only like three or four days, but everyone made so much progress, often by just having a chance to focus on what we were doing without all the distractions of work. JTS went from kind of like 10 or 20% done to 90% done just over the course of a couple of days and a lot of help. And GeoMesa really sat down and finalized most of their dependencies, really set them up for that first release. And GeoGig mostly went on a witch hunt. So it gave up things that weren't really very well done, like the Osmosis library, which is kind of a mix between GPL and LGPL code when we went in to look at the details. And uDig, we did a lot of research. But yeah, that was a really helpful event because, well, sometimes it's hard to get motivated to do some of the IP review. And it was really good to get everyone together so that people who had done it before could mentor people. So JTS was amazing. It went from absolutely no work done to almost fully through in a couple of days. It was pretty good. Safety in numbers. Yeah. This is one of these rare pictures of you, Tyler, without a beer. I know. I know. Yeah. Oh, and here, this is where we took down GitHub. That was pretty good. So just for our final day, GitHub crashed. So we all had pink unicorns. It was the big JTS commit that took down GitHub. I'm sure of it. I'm sure of it. Okay. Just a bit on how LocationTech functions. There is a Technology PMC, a top level project. What does that actually mean? Well, the top level projects are kind of used to group things. So as LocationTech grows, we might split this up into like libraries and processing and things. But what does this actually mean? It means that the leads of the software projects have a hangout once a month, and we see how we can help each other out, and we send the notes to the email list. That's been a really strong community building activity.
In terms of helping people out, it might be something like GeoMesa wanting to use a new version of Accumulo. I could check that out and confirm it was open source and stick a plus one on it. The other kind of thing we do is new committers and so on. And this is actually done not with, like, an email list. It's actually something that's kind of automated on the Eclipse website. And it's a little bit tricky. You can kind of go through this website and have the voting approved and have the steering committee approve it. But we also need to ask the candidates to sign some paperwork, that contributor license agreement kind of thing you mentioned earlier. So this is where JTS ran into trouble, because the lead, Martin, didn't have control over the code base. He wasn't in a position to change the license very much. And that's kind of good. The other thing we do is we end up starting new projects. So we saw that space filling curve project start, but you can go through here, if you're looking at joining LocationTech or if you're looking at starting a new project from scratch: you fill in a form, you send an email to emo at eclipse.org, which, I have no idea what that stands for. Eclipse Management Organization. Eclipse Management Organization. You'll have lots of acronyms to learn. Yeah, for a creation review. And that kick starts a process where they check to see if there are any trademarks. You might have to answer a few questions. And at the end of that, they provision you with infrastructure, but we mostly all use GitHub. Yeah. So I had the pleasure of submitting the Raster Processing Engine, which I hope will be a replacement for Java Advanced Imaging. I submitted that last week and I was hoping it would get approved this week. It's public now. It's public now. Excellent. That's very exciting. Late breaking, you can read that thing on the internet. In terms of outreach, LocationTech does all kinds of outreach, from sponsoring events like this. One activity we really enjoy is the LocationTech Tour, which rather than have a big conference like this, takes the speakers on almost like a road show, visiting smaller groups. And we really enjoyed a LocationTech Tour stop we did at the local Victoria university. It was really fun to be in an Esri Center of Excellence, introducing students to open source for the first time. Any questions? If you don't have questions, I'll just stare awkwardly at Tyler and we'll see what happens. Hey everyone, hey Jody. Just, this is a comment, not a question. So you just splashed up the tour site with eight events. Thea let me know that there are gonna be at least 15 events this year. And if anyone here is interested in having an event in their town, reach out to myself, Andrea Ross or Thea Aldrich. We're here to help and we'd love bringing the tour events to new cities. I would have put up those contact details, but this was a dev talk. Devs can organize things. Devs can organize things sometimes. Yeah, so first of all, I'm very happy that you're working, or that you want to work, on the JAI, the Java Advanced Imaging problem. I think, I just searched back for it, I started like complaining about it six years ago on a mailing list. Anyway, wait, does that mean you're volunteering? No, I don't know. No, but I wanted to say the opposite. You still have the job title, community lead. I volunteer you. So no, what I wanted to ask is, okay, is there anything like public where I can point people to?
Because I know there are a lot of non-GIS projects that actually have the same problem. So we are really setting it up to not be specific to geospatial. And we're doing that by interacting with the science working group and other interested parties. And Andrea Ross just mentioned that the little project proposal is now public and you can see it on the Eclipse website. Okay, so I can point them there. If I was especially brave and handed this mic to Tyler here, I could go to the website. Uh-oh, we're going live. I'm worried. While we wait, are there other questions? I'm just curious. Just out of curiosity, since I know that you work with both organizations, Jody: it seems that because there are fewer projects currently at LocationTech, it seems to be very tight knit, which is lovely. Is that, I mean, correct me if I'm wrong, but it sounds like there's a little more communication between the projects because there are just fewer, I mean... There is a smaller number of projects and often we are kind of learning the ropes together. It's also very well set up to help mentor project teams that are new to open source. And so I quite value that experience. It's one of the reasons why I take part. I should also point out that the structure of LocationTech is organized with members on a steering committee, but the committers do have a voice. And so I am one of two steering committee representatives... Sorry, two committer representatives on that steering committee. So the committers, even though they're not paying members, they do get a say in what's going down. Yeah. Any questions? Thank you guys very much. It's so nice to do like a dev focused talk on LocationTech. So I thank you all for the opportunity to speak. Thank you. Thank you.
LocationTech is a working group developing advanced location aware technologies - which tells you exactly nothing about what it is like to join LocationTech and get things done. That is what this talk is for - bringing together several project leads from the LocationTech stable to cover:
- How LocationTech is organized
- How project promotion, marketing and fundraising works
- Running a project in terms of committers, license selection and transparency
- Starting a new project, incubation and release
This talk provides a background of LocationTech and we can answer your questions. The real focus is on covering the project experience as a developer. In the past we have focused on a lot of the great technology taking shape at LocationTech; this year we would like an opportunity to talk about the people, our culture and the cheerful attitude that goes into getting-it-done.
10.5446/20424 (DOI)
Okay. We're going to start. So please welcome, well, Jody and Andrea. Welcome to the State of GeoServer 2016. Andrea and I were just talking and it actually appears we've done a fair bit of work this year. So this is our chance to tell you all about it. Andrea is a technical lead with GeoSolutions and he's been a long-standing leader on the GeoServer project, an OSGeo charter member. I'm also a long-standing member of the GeoServer steering committee and you might see me at a few other talks. So GeoServer 2016, what have we accomplished? For those of you new to the GeoServer project, it's a Java application server that really focuses on making sharing your geospatial data easy. It makes use of a wide range of standards to help you publish your data. In terms of the health checkup, our community has been fairly static this year. We have increased our number of committers from 28 to 37, but our mailing lists are holding steady at around 600, and pull requests at around 500 a year. We do have a slightly smaller active base of contributors this year, but the pull requests are still going strong. Our code base is really active and very healthy. Andrea Aime has also stepped down as OSGeo project officer and Simone has volunteered to take on that role. We are keeping up an aggressive release schedule. For those of you new to the project, we have a little bit of a staggered release plan. So each release spends six months as the stable release and then it receives a further six months of maintenance upgrades. We do have one glitch in our release schedule this year. We had about a two-month delay and I'll talk about the reasons for that a little bit later. So our GeoServer 2.9 release was delayed by about two months. Also in terms of the project, we've had a long-standing overhead where a lot of the pull requests that come in, we actually have to reject and say, hey, you know about those headers, can you please update them? And it's been a significant 7.7% of our feedback this year, just asking developers, please try again. We got legal advice through OSGeo, thanks to the OSGeo board for making that happen. And the practice on updating your headers, we can actually relax a little bit. Our practices were set before the US joined the Berne Convention. With that in mind, we have a slightly updated header policy. This is actually something we're not going to talk about very much, but it has been the major work that's gone into the project this year: it's focused on maintenance and technical debt. The GeoServer project is 10, 12 years old? Probably more. Probably more, okay. And over that time, we've developed a large code base that we've had to, well, we always have to work on bringing it into the future. One of the things we've done is, it's become impractical to rely solely on volunteer time in order to keep track of all the issues that come in from the community. And we're trying an experiment of setting up a monthly kind of bug stomp in order to gather some of our new developers and teach them how to look into the issues as they come in and see if we can be a bit more timely. We tried this for the first time in July, and we're going to be trying it again this weekend at the code sprint. For our first one, we managed to close a large number of feature requests and wishes for new features that had been open for over five years. I think if a feature request hasn't attracted funding in five years, it's safe to say nobody is interested. Yeah, so this is an experiment we're trying.
The key maintenance activity we took on this year was upgrading our user interface from Wicket 1.4 to Wicket 7. And this was actually a major undertaking. We flew our GeoServer development team from all over the world to Victoria, Canada, and we had a really organized sprint, and we were successful in making this upgrade. The other kind of challenge, I guess, is we did update to Java 8 this year, and that's simply because our customers asked us to. Java 7 no longer receives kind of free security updates, and so our customers are migrating to Java 8. As part of that, we had a surprise, and this was responsible for that two-month delay. The library, the Spring framework that we use to glue GeoServer together, wasn't compatible with Java 8, so we needed to update to Spring 4. I'm just going to hand this over to you. Yeah, thank you. So just a quick enumeration of the various features that we are working on, or improvements. First, the GeoPackage module is a community module right now. We are going to push it to supported status by increasing its testing and making sure it's compatible with the OGC test suite. We updated the WFS cascading, that is, the ability of GeoServer to act as a client to another WFS server. We had an old code base that was showing its age. We made a number of improvements and tested it further, and it's now ready for consumption. In terms of raster data sources, we improved our masking support, supporting vector masks and raster masks so that you can cut your raster data, I don't know, cut out clouds and stuff like that. We also made the mosaic more powerful in order to support mosaics made of mixed color modes, like gray, paletted and RGB, all in the same mosaic. We also added support for multiple projections. So you can actually mix together images in different projections in the same mosaic. That's going to be released in GeoServer 2.10. Thanks, Devon. Next. Also, at the level of the image mosaic, we are going to start working soon, like next week, on excess granule removal. Say you have a very deep mosaic with many images overlapping each other and you are controlling the stacking order by recency, resolution, color versus gray, whatever. So you have a dynamic stacking order and you want to avoid opening images that are not contributing to the output. That's what the excess granule removal is going to do. We are going to support multiple coverages for the mosaic. We already do that for NetCDF, and the pyramid, that's something we don't do yet, so that we can make pyramids of complex structures. We are going to optimize the coverage views so that when an SLD chooses to only fetch one band out of the coverage view, we are not going to read the others from the file. So, speed up. And we are adding support for rotated pole projections. Ben did this. And it's nice because to support this, Ben actually had to contribute to two projects: to the NetCDF Java libraries first, the base library that we use, and then to GeoServer to add the support for this projection, which is used in weather forecasts. Styling: we finally added support for the perpendicular offset for lines and polygons, so that you can efficiently offset a line. That's part of SLD 1.1, but of course we support it also in SLD 1.0. We are adding some ncWMS-like extensions to GetMap so that you can just stick in a palette, just an enumeration of colors, and then have GeoServer dynamically apply that palette on your raster based on its minimum and maximum values. We added support for it in animations.
You can tell GeoServer to apply that in logarithmic scale instead of a linear one. It's very nice and good for meteorological data normally. Next. There's some good work from Torben from Boundless. Basically, you know that the CSS page has this little play and preview mode that makes it nice to edit. We are moving that to the main style page so you can actually type in your style and preview directly in the page without having to save, go to the preview and so on, so it's much quicker. We have some GeoCSS features coming in. GeoCSS did not support rendering transformations, but we are adding that. I'm also working on rule nesting, which helps to make the style even more compact. We also have a new language for styling, which is YSLD. It's a derivative, well, it's like SLD but in a YAML syntax. So lots of boilerplate removed. It's much more compact and it's one-to-one compatible with SLD, meaning that you can take an SLD and turn it into YSLD automatically, which is nice. We added support for custom legend graphics. You know that GeoServer has GetLegendGraphic support and it builds the legend for you, but sometimes the job is not done very well, or you might want to have your own custom legend. You can now upload a static image to do that. In terms of WMS, we are adding for GeoServer 2.9 and 2.10 a new JPEG-or-PNG format for imagery that needs to have transparency. So right now you are stuck with PNG, which is big. This format basically checks if your image actually has any transparency in it and it will encode it in PNG or JPEG depending on whether there is transparency. We added support for UTFGrid in WMS, the Mapbox-style UTFGrid, with a bit more flexibility compared to the basic spec. So we support more than the basic tiles and it's available in all projections. Of course it's up to the client to decide whether they want to support this or not. We have a community module generating vector tiles. Those can be cached, and the UTFGrid could be cached as well, so you can do, let's say, Mapbox without the Mapbox by using this integration in GeoWebCache. And Dave Blasby assures me that's going to become an extension for 2.10. Right. That's great. In terms of the legend graphic, we also added the ability for the client to control the layout of the legend, so that if you want it in a single column, in a single row, two columns, two rows and so on, you can now specify that. In terms of WFS, we are breaking the limits of shapefile sizes. If someone tries to make a shapefile dump from WFS that's bigger than 2 gigabytes, you are asking for trouble. Right now the code recognizes that and will just start paging and generate multiple shapefiles instead. This is not exactly GeoServer but a related project. HALE is a mapping tool. This is a desktop tool. And we worked on making it support our app-schema mapping files, so that you don't have to go crazy and edit XML mappings all day to get complex feature support. Also in terms of complex feature support, when we are joining tables in the database to build a complex feature tree, we are now able to also send down filters to the database for joined tables, which we weren't able to do before. In terms of tile caching, we added support for MBTiles. You can use a single MBTiles file, like, totally compliant, but we also added a few extras such as having multiple MBTiles files, storing non-Google Mercator projections in it, storing more formats than just PNG or JPEG.
Of course, if you use these MBTiles extensions, you are not going to be able to use those MBTiles files in other systems. The WMTS service now has a config page, which is nice because it means we can add INSPIRE extensions to it, and which means that we can finally do an INSPIRE-compliant WMTS in GeoServer. We are also working on an n-dimensional discovery extension for multidimensional data. Sometimes when you have scattered data or data that has dimensions which are related to each other — like in a forecast, runtime and prediction time are related to each other — it's difficult to find combinations of values that actually provide you an output. We are working on a little specification to allow clients to explore the n-dimensional space. We have a link there. Please have a look and tell us if you like it. WPS work. We have done some work in this area. The group-by process now can... Well, yeah, the aggregation process now can do group-by, and you might say, well, this is boring. It's not boring, because it allows you to power nice diagrams in the client from a WPS request. We improved the resource control. We can now tell apart the queuing time and the execution time for asynchronous processes. So we have a connection pool there and some processes can be stuck in the queue for a while, so now we can control that better. We made some improvements to the download process, which is a community module. Also, we can now have... It was already there since last year. Now we can select bands when we are extracting raster data, and what's more interesting is the next slide. If you are using that process in anger to make very large extractions with a very large number of clients, we made a number of optimizations to improve the ability to extract large images at scale on big machines. We made a test on a machine that had 40 cores and 128 gigabytes and we found some scalability issues that you would not see on a four or six core machine. So, configuration and management, let's move on. You want to talk about this? Certainly. So this is some work that we worked on at Boundless with Niels from Belgium. This has been a long-term strategic play for us. GeoServer for the longest time has been tied to having a file system to store its configuration. We introduced an API last year in order to allow GeoServer to store all the icons and fonts and small configuration files and have a choice of where to store them. We added this to the code base last year. This year we were able to implement a JDBC store which lets us store icons and so on in the database as JDBC blobs. One thing that's really nice about that is there is a REST API, so you can finally manage your icons and fonts using the REST API, which is great for automation. But there is also now a GUI that currently is a community module. We hope to add that into the mainline program later. But this allows you to manage your icons from the GUI if you don't have direct access to the machine that your GeoServer is running on. The other thing that we added, the Victoria office team, we introduced a status page extension for the REST API. This allows us to check the GeoServer configuration, which is really helpful for automation. I think Morgan did that work. Thank you, Morgan. Right. So, parametric configuration. It happens when you are working with GeoServer that you have several environments in which you are working, like a test environment and a production environment. Sometimes there are more environments and each environment has its own set of connection parameters to the databases.
So you need to connect to different hosts on different ports with different usernames and passwords and so on. Right now it's a bit of a pain because you switch the data directory from one environment to the other and then you have to somehow fix all these connection parameters. In GeoServer 2.10, and we plan to backport it to 2.9, we are going to allow you to set variables in the configuration and then look them up from a property file or some other configuration source, so that you can literally just switch the configurations around without having to manually change the configuration. That goes hand in hand with a new module, which is a backup and restore community module, which is going to perform a full backup of your configuration and restore it in a different environment, taking into account the variables if need be. The operations are going to be fully async, controlled by a REST API. You can do a dry run, so you might try to do a restore without actually doing the restore, just trying it out to see if there is any problem before actually running it. And there is a full REST API to control and run the backups and restores, so we have a UI as usual but we also have a REST API for automation. In terms of security, we have a new LDAP user group service, so if you are storing your users in LDAP this is a new opportunity. We already had an authentication system; this is a different way to interface with an LDAP, so it's always nice to have a choice. If you know about GeoFence — there was a presentation this morning about it — we now have a way to run it inside GeoServer as opposed to a separate server, which makes it easier, and we have a simple user interface to edit the rules and control their order. And we also added the ability to control the admin rules. Admin rules are the rules that control who can manage which layers, so it's administration control, not data access control. And back to Jody. So I get the fun job of talking a little bit about some of the R&D we have got going on in the GeoServer community. One of the projects that is near and dear to my heart is the LocationTech project GeoGig. The GeoGig team has actually put together a GeoServer community module which allows GeoGig repositories to be integrated into the application, so there's a new screen there to configure your GeoGig repositories and then you can publish those out as individual data sources. So you can actually publish out the same information twice, one on the master branch, one on like a test branch, and make that available to just normal WFS clients. There is also a GeoGig web API so you can control and interact with some of this repository management stuff for automation. Looking ahead to the next year we have got some R&D that we can see into the future. One of the things on the horizon, which I actually scheduled for the code sprint this weekend, was looking into Java 9 compatibility. I thought we were going to have to pull this off by October but it looks like the Java 9 release has been delayed. Nevertheless, if you are interested in helping out this weekend please join the GeoServer team for that. We are always welcoming volunteers. The other one is a little bit of a tougher problem. As you well know GeoServer is a Java application and an open source application. When we initially joined the OSGeo community it caused a little bit of heartburn because Java was not open source. Java was controlled by Oracle and OpenJDK did not exist yet. That was troubling to some members of the OSGeo community.
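Going back to the parametric configuration just described, the underlying idea is plain placeholder substitution at load time. The following Python sketch only illustrates that concept with made-up property names; it is not GeoServer's actual implementation.

```python
import re

# Made-up per-environment property sets; in practice these would live in a
# small properties file next to each deployment.
ENVIRONMENTS = {
    "test": {"jdbc_host": "test-db.internal", "jdbc_port": "5432"},
    "prod": {"jdbc_host": "prod-db.internal", "jdbc_port": "5432"},
}

# A connection parameter as it could be stored once in the shared configuration.
stored_value = "jdbc:postgresql://${jdbc_host}:${jdbc_port}/gis"

def resolve(value: str, env: str) -> str:
    """Replace ${name} placeholders with the values of the chosen environment."""
    props = ENVIRONMENTS[env]
    return re.sub(r"\$\{(\w+)\}", lambda m: props[m.group(1)], value)

print(resolve(stored_value, "test"))  # jdbc:postgresql://test-db.internal:5432/gis
print(resolve(stored_value, "prod"))  # jdbc:postgresql://prod-db.internal:5432/gis
```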
With OpenJDK now being free and available we can now stand up GeoServer entirely on an open source platform, which is amazing. Except for one small bit. We make use of an image processing library called Java Advanced Imaging. This was part of the original Oracle or Sun Java. It is still produced under a binary license. That means we are not 100% on an open source platform. That is something we would like to fix. We do know other people that really value open source freedom. We are putting together a joint initiative with LocationTech and OSGeo in order to create our own raster processing engine. This will be a major undertaking at both foundations. It will be taking place over the next year. We are going to lean on the LocationTech IP team to make sure this new library is really free of any kind of encumbrance. We will be working hard with the GeoTools and GeoServer teams in order to successfully migrate all of our projects over to this new engine. It is important to note that we have to do that because JAI, besides its license, is actually a great library for raster processing. Deferred loading, tile-based computation, it has its own tile cache, it can support concurrent tile calculation and the like, which is something you normally don't find in open source Java raster processing libraries. That is why we are going to do this work: to retain all these abilities to process very large rasters without ever having to load them into memory, which is of paramount importance for a server which is handling hundreds of concurrent requests from users. I think that's it for our presentation. How are we doing for time? So we have about six minutes for questions. So if you want to ask a question, please raise your hand and someone will come with a microphone, otherwise we cannot record the question and people in the audience won't hear it. So are there any questions? We might be off the hook. This room is always intimidating because it's such a big room that people are scared to ask questions. I think it's the podium. When is the next release actually planned? It's a good question. It was on my slide. The next release — because we suffered this two-month delay, nevertheless we really like to keep to our regular release schedule, in part because it matches our business cycle. It's very handy to have a release scheduled in February because we don't have a lot of customer work slowing us down. Rather than continually have this two-month delay, we're going to claw back a month at a time. So the next release of GeoServer is scheduled for October, I believe. And then the following release in the spring will be back on track again. We're going to do two short cycles, two five-month cycles instead of six. Thank you guys very much, and please keep enjoying using GeoServer.
State of GeoServer provides an update on our community and reviews the new and noteworthy features for the project. The community keeps an aggressive six month release cycle, with GeoServer 2.8 and 2.9 being released this year. Each release brings together exciting new features. This year a lot of work has been done on the user interface, clustering, security and compatibility with the latest Java platform. We will also take a look at community research into vector tiles, multi-resolution raster support and more. Attend this talk for a cheerful update on what is happening with this popular OSGeo project. Whether you are an expert user, a developer, or simply curious what these projects can do for you, this talk is for you.
10.5446/20423 (DOI)
Okay, welcome to our second session today. We're going to have a presentation by Francesco Bartigliano on Enterprise Single Sign-On in GeoServer. So I'll let him get started. Either way, in both ways. Thank you. Good morning, everyone. I'd like to thank Mauro, the previous presenter, for pointing out that security is hard. Today I want to speak about a slightly different perspective on how to implement security in GeoServer. So, I'm from GeoBeyond. We are a company specialized in geospatial solutions and identity and access management systems. We are a partner of Boundless Spatial as a solution provider for the OpenGeo Suite. And we have founded RIOS, which is an Italian professional open source network with different companies that cover different kinds of stuff like portals, business intelligence, big data, and so on. So I will try to go very, very fast on the first part of this talk because Mauro has already introduced a lot of features of the GeoServer security model. So you heard that it is based on Spring Security and allows you to do access management with authentication and authorization, which are the two main phases in protecting our resources. Authentication in GeoServer is based on filters, providers, and chains, while authorization is based on groups and roles, and can be separated into data management and service management rules. And for identity management it comprises internal providers and external providers. So let's have a look at GeoServer authentication. Basically we have filters. A filter can be a delegation to the servlet container, can be anonymous, can be a cookie, like remember-me, for authentication from a previous request, can be form-based, can be based on certificates. We can have HTTP header proxy authentication from an external system, and we can have GeoServer basic and digest authentication. And we can also get credentials directly from a header as well. Credentials can be, as I said, internal — like, as Mauro told you before, we can have user information from the basic user group service with credentials, basically username and password. And as well we can have external user information from an LDAP server, where the user can bind with username and password for trust, and also from a JDBC database, trying to connect to the user database. So again we have chains, which chain ordered filters against authentication schemes — which we can call providers — during an authentication flow. And the rule is that the chain handles the request as soon as at least one scheme succeeds in the pipeline. So let's have a look at filters versus providers in the chain. Providers perform the actual authentication, while filters select the specific authentication scheme to apply for a request — so basically, for example, whether authentication is actually required or not — and filters can also be applied per request type with matching rules. Rules can be described by the HTTP method, a pattern and a regular expression for the query string parameters. Authorization. So we have, as I said, roles. Roles are entities with a name, parents and a set of key-value pairs, associated to the privileges and the permitted resources for a user, and they can be assigned to users and groups of them. Roles support inheritance, and in GeoServer we have different reserved system roles, like ROLE_ADMINISTRATOR, ROLE_GROUP_ADMIN, ROLE_AUTHENTICATED and ROLE_ANONYMOUS, where for example ROLE_AUTHENTICATED just means all authenticated users. Roles can be served by different kinds of persistence.
They can come from an XML file, from the role service — so just a file, roles.xml, in the GeoServer data directory — they can be extracted from a JDBC database, they can be defined inside the deployment descriptor with the J2EE role service, and they can come from an LDAP directory server. Now, in GeoServer we can get roles from the user group service, so actually retrieving the active roles directly through the configured role service, with some limitations — like we cannot have group membership and no custom roles — and by using an HTTP header attribute. In that case roles are received through proxy authentication, so I can, for instance, define a custom header carrying the roles. So how many kinds of rules can I have in GeoServer? We can separate and distinguish them into the management of data, so at the layer level, and the management of services, so the different types of OWS services like WMS, WFS and WCS. So data management provides security rules at the data level: you can combine workspace, layer, permission and role, and you can also use the catalog mode, while at the service level you can define specific rules per OGC service. And you can have REST-service-specific rules as well. So this is, for example, the syntax of the layer security in GeoServer. As you can see here, we can have permissions for read mode, write mode and admin mode, and there are examples of some defined rules. And this is the syntax and an example for service security, while this slide explains how we can define, with the REST services syntax, a rule that sets the security at the service level for the REST API. We can also have plugins, so GeoFence, and CAS for single sign-on. Basically the main difference between GeoFence and the standard security is that GeoFence is an advanced authorization system that allows rules that overcome the limitations on combining service and layer security. But I would like to get to the point of this talk. I'm wondering, and I have some doubts. So the first question is: am I able to satisfy the technical security requirements for an enterprise single sign-on with GeoServer and the collateral stuff that I can use in my infrastructure? Well, the answer, more or less, is yes. But I have an additional question. Can I achieve a simple security model with robust and clean governance? Well, GeoServer is a geospatial engine for managing geographical information systems, while security often has specific software for managing it: an identity and access management system. So basically I would like to point out some simple business security requirements. The first is: keep security as simple as possible. Have a team and dedicated infrastructure — I mean hardware and software — for implementing identity and access management. And also, probably the main rule is to keep control of the governance over requirements and technologies, to adapt your system to the proliferation of digital identities. For instance, I mean identities associated with the Internet of Things, which means, in geospatial terms, a massive growth of geographical information to be secured. And another good reason to go with external identity and access management software is to adopt a centralized, role-driven security policy model that keeps the number of rules as low as possible. So let's have a look at the main concepts in identity and access management.
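For readers without the slides, the data, service and REST security syntax referred to above lives in plain property files in the GeoServer data directory. The sketch below is reconstructed from memory of the GeoServer documentation, with made-up workspace, layer and role names, so treat it as indicative rather than authoritative.

```
# layers.properties — workspace.layer.mode=role[,role...]  (r = read, w = write, a = admin)
mode=HIDE
topp.*.r=*
topp.states.w=ROLE_EDITOR
secure.*.r=ROLE_AUTHENTICATED
secure.*.a=ROLE_ADMINISTRATOR

# services.properties — service.operation=role[,role...]
wfs.GetFeature=*
wfs.Transaction=ROLE_EDITOR

# rest.properties — URL ant pattern;HTTP methods=role
/rest/**;GET=ADMIN
/rest/**;POST,DELETE,PUT=ADMIN
```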
User management and its lifecycle for the provisioning of identities has to be central in the enterprise. So security has to be a serious topic. You have to expect to deal with different kinds of security mechanisms, like authentication, federation, social authentication, mobile authentication, passwordless and user-managed access. And you also have to expect to deal with the management of the provisioning of identities, which is a different task from authentication and authorization. So enforce the separation of the authentication phase from authorization, and consequently apply different policies for authentication and authorization. Have different strategies for your coarse-grained and fine-grained policies in a model, and plan and design your rules before their number becomes ungovernable. So let's have a look: I would like to introduce to you the ForgeRock IAM platform, which is composed of four components. OpenAM is an all-in-one access management solution: you have authentication, single sign-on, authorization, federation and web services security. You have centralized workflow and provisioning for the identities of users, devices and things with OpenIDM. And you have a standards-based directory — with a cool feature, because every service is RESTified — with OpenDJ. This is the big picture of the standard ForgeRock architecture, so you can have a look and see that the main concept is that this platform is meant as a modular platform, so you can compose it by choosing, based on what you require in terms of identity and access management, the components of the ForgeRock suite that you actually need. So OpenAM is the component specific to access management: you have authentication, single sign-on, you can work with different social sign-on providers — Facebook, Twitter, Google, GitHub and many others — you can have strong authentication methods, you can have multi-factor authentication, adaptive risk, federation for cross-domain, authorization, user-managed access, self-service management and auditing, which is probably — yeah, thank you — the answer to the question someone asked before about logging: what a user actually accesses can be traced. This is, yeah, the focus on the main components of OpenAM and how OpenAM can fit with GeoServer. Well, we have in OpenAM two components: a web policy agent, which is mainly a web server component that can be installed in your web server, and you also have the possibility to use a Java Enterprise Edition policy agent to be directly installed in your application server. So basically a web policy agent can be combined with the HTTP header proxy authentication in GeoServer for the authentication, and roles can be sent in an HTTP header attribute, while the Java policy agent can be coupled with the J2EE role service in GeoServer. So you have different possibilities to integrate OpenAM with GeoServer. So how can you start experimenting with this integration? I have created a Docker container. You can clone the repository, Yamon, for prototyping. I will tell you how this name came about: Yamon is a Japanese name that means the guardian of the gate. And this work is dedicated to this dog. So thank you very much for attending. Thank you Francesco. Questions? Hi. Assuming you have a GeoServer, how would you do, for instance, SAML authentication? How would you do that? Yeah. OpenAM also supports SAML.
So you can define your authentication method in OpenAM and inject SAML authentication into GeoServer with an HTTP header. Any more questions from the audience? You told us about OpenAM. You also have expertise with other single sign-on things like Shibboleth, which is one of the SAML 2 standard implementations, I guess? Not so much. I have expertise with the Oracle implementation. That is not so good right here. But yeah, more or less every access management software has more or less the same ways to manage authentication and authorization. So probably the approach that I described here could be applied to Shibboleth as well. I don't know if Shibboleth has an agent to be installed into the application server or the web server. Yeah. You mentioned having separate teams for the security aspects, that being quite important. When it comes to authorization of different layers, the advantage of GeoServer having GeoFence is that you will have transparency in, I guess, which layers you're securing. How do you go about coordinating between the model you use for the authorization and what layers you'll need from GeoServer? Are you assuming the use of GeoFence together with an external identity and access management system, or what are you asking about? Just generally, how do you come up with the business rules that will manage which layers can be requested if you're not using some authorization tools? Yeah. Basically, you have the layer in the URL and you can manage every authorization rule from that, because in such a tool you have at your disposal different possibilities to define your expression to match a single request. Basically you can define, I mean, any expression to catch your specific rule. Yeah, just wondering about, not only layers, but you could use GeoFence for security to limit the data that gets passed; but how about if the user actually only has access to part of the data inside the geofence? Is there a use case for that? Yeah. Basically, it could be possible if you store your GeoFence data in an LDAP server and then connect the external identity and access management tool, for the authorization rules, to that LDAP server. It's a possibility. That's all the time we have. Thank you Francesco. Thank you. Thank you very much for attending. Thank you.
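To make the header-based side of the integration described in this talk a bit more concrete, here is a minimal sketch of the kind of request a policy agent or reverse proxy ends up forwarding to GeoServer once OpenAM has authenticated the user. The header names sec-username and sec-roles are assumptions; they have to match whatever the HTTP header authentication filter is configured to read.

```python
import requests

# Hypothetical deployment: GeoServer sits behind a proxy/policy agent that has
# already authenticated the user against OpenAM and forwards identity as headers.
headers = {
    "sec-username": "alice",        # assumed header carrying the user name
    "sec-roles": "ROLE_EDITOR",     # assumed header carrying the roles
}

resp = requests.get("http://localhost:8080/geoserver/web/", headers=headers)
print(resp.status_code)

# If the header authentication filter is configured with these header names,
# GeoServer treats the request as already authenticated as 'alice' with
# ROLE_EDITOR; without such a filter the headers are simply ignored.
```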
Security is a major concern in the enterprise and covers all aspects of identity and access management. Moreover the proliferation of devices and digital assets connected to the Internet of Things is a massive source of growing geographic information. GeoServer has a lot of features built in to manage authentication and authorization, but often this kind of problem can be better dealt with by a dedicated tool (e.g. the ForgeRock IAM suite) which can provide identities and access policies to several clients alike. What are the best practices to integrate GeoServer into an existing single sign-on and identity lifecycle? Although tools like CAS and GeoFence allow such features to be enabled, it's more likely that GeoServer needs a leaner and cleaner path towards the externalization of authentication and authorization for the OGC services and its REST API.
10.5446/20418 (DOI)
Good morning everybody. My name is Manuel Grézonet. I'm from the French Space Agency. I'm working in Toulouse. And let's start the session. I think it's a nice way to continue after the keynote from the European Commission about the availability of lots of observation data. So let's start with OpenAerialMap. Hi everyone. My name is Daniela Silva. I work for Development Seed and I bring you OpenAerialMap, also known as Building an Open Source Imagery Browser: UX and Technical Decisions to Develop OpenAerialMap. Yeah, I'm working on getting shorter titles for my talks, because this is not going well. Has anyone heard of OpenAerialMap before, or the OAM project? Oh, wow. Okay. That's nice. Not expecting it. So I'm going to explain to everyone else what kind of problems we're trying to solve with this tool, why we decided to build it, and then some considerations around decisions we had to make while developing it. So first things first, what kind of problems are we trying to solve? Why did we decide to build it? We noticed that there was a lack of imagery for disaster response, and this kind of imagery usually needs to be as up to date and as available as possible. Users need to be able to find it pretty easily for it to actually have an impact. So imagery from satellites and unmanned aerial vehicles and other types of aircraft becomes increasingly available after a disaster nowadays, especially because you have a lot of organizations and even individuals that just capture a lot of imagery that they then share with the world. So what OpenAerialMap tries to do is to provide a simple way for users to search, find and then use this kind of imagery for humanitarian response and disaster preparedness. A good example of this is actually during the Ebola outbreak, in Mamou, Guinea. You can see the progress of the OSM edits happening in the background, and this was done using aerial footage. So 68 volunteers were able to map this big region in a very short time, and this is very important for disaster preparedness, because then the proper authorities can get to the different places, they can know what kind of facilities are available, where they are, and this can actually make a significant change in people's lives and in how help gets to a place. The Open Imagery Network is actually the base upon which the OAM project is built. It's a network of openly licensed imagery using Creative Commons BY 4.0. It's a distributed system, not in the usual IT sense of distributed systems, but in the sense that any person or entity can actually host one of these OIN nodes, and it's actually pretty easy to do so and to contribute with imagery to our system. Essentially we have two ways. The first one is to host one of these nodes yourselves. You follow the instructions that are listed in the readme, you have to release the imagery under CC BY 4.0 — so it needs to be open and available for anyone — and then once you add your information to the register you're done, and it's available, it's out there for anyone to use. However, if you can't afford to host a node, if you don't want to do so, if you don't have the technical skills or the time to maintain one, you can still help us out. You can still contribute with imagery. You can use the uploader, which is a form that we built, and upload images to the HOT OIN node. I'm going to get to the form a little bit further on in the next slide. Okay, I've mentioned OAM, I've mentioned OpenAerialMap, but what is this? What am I talking about here? So OAM is actually a set of tools.
So we have different little components and we use them to find and share the imagery that's available on the OIN. So the first component is the catalog. The catalog is responsible for indexing all the imagery in all the nodes in the OIN and then making it available through this powerful API. It supports filtering by different properties like a bounding box, the provider, the acquisition date, basically any property that's available. However, this catalog, although it can allow users to find all the imagery they're looking for, is very geared towards power users. Not every user is able to understand an API response. Not every user is able to use an API. And so at this point we needed a way for the average Joe to be able to use it, to be able to query the system and get the information they wanted. So enter the browser as a solution. The OAM browser provides, or tries to provide, a simple way for your common user to search, find and use this kind of imagery. Okay, let's see if we can get a little demo of this working. Okay, so I'm going to speak louder because I don't have a mic here. So this is actually the browser as it is right now. It provides a grid-based interaction, so users can search for a specific location or they can just browse the world and find whatever place they're looking for. Each cell is shaded according to the number of images available and then you get a little number that tells you how many footprints are available in this specific cell. So let's look something up. So once you select something you get a preview of the footprint and then you can have a quick preview on the map. This image is a very low resolution one — it comes from Landsat 8, and for a satellite you get like a very, very big image. But I happen to know that we have very nice ones in the Philippines, so let's go to those. Okay, so this is a nice one. As you can see it's just a little speck, because this image covers an incredibly small area. It's captured by a UAV but it's incredibly high resolution, so 4 centimeters. This is very, very good for tracing purposes, and so once a user has found the kind of image they're looking for, they can either download it or they can use it directly in, for example, the OSM iD editor or JOSM, or just any software that supports TMSs. To be able to find images more easily, and also to comply with the API, we also support simple filtering, like by time frame or by the resolution you're looking for, or even whether you want all images or just the ones with TMSs. So in brief this is what the browser does and how it looks. And we're back. So the grid, the actual browser grid, was the most difficult thing to do in the whole project, not just technically but also conceptually. We had to figure out a way to show a lot of imagery in a simple way but yet powerful — some way that would not overwhelm the user. Our first try was to just place all the footprints on the map and see how this would work. It didn't, of course. It's a lot of noise. You can't really figure out where things end or where things start and you are not able to actually pick a specific footprint. We have different sensors, we have images covering different areas coming from satellites, UAVs, drones — and then people stitch all the images together. So this kind of approach doesn't really work. And so, drawing inspiration from the hex grid experiments of Turf.js and the battleship game — we actually ended up playing a few rounds — we came up with this kind of choropleth grid.
Basically it's a cell grid that covers the whole earth, and then each cell is colored according to the number of footprints that intersect it. So lighter colors, fewer footprints; darker color, more footprints. It's very easy for the user to understand. It's very easy to navigate because lighter areas mean less, darker areas mean more, and it's simple. The user can just find the areas they're looking for and then zoom in and more granularly pick whatever they're looking for. So the idea was there. Now the building process. We ended up having three different iterations of this grid. So the first option that we started with was to draw a grid based on geographical coordinates. This seemed to work well, but then we started moving away from the equator and we got rectangles instead of squares. And this was also a bit problematic for the user, because for someone that doesn't keep in mind how maps work — that you have a projection and the earth is round — it's weird. It seemed that we were covering more area as we moved away from the equator, although this is not true. The user would just get confused with this, and then you get to the poles and the rectangles are so long it becomes difficult to use. And so we changed our approach to what we call the pixel-perfect grid. It's drawn based on pixels, of course, by converting between coordinates and pixels every time. It's not as geographically accurate as the other one because once you get near the poles, since the squares are all the same size, they take up a smaller area; but from a navigation standpoint this is not a problem, because the intersections will still be calculated correctly and so we won't be lying to the user. So this is how it would work. However this adds some problems of its own. It was not performant at all. So as you can see — okay, as I, sorry, this was just playing on the other screen. Interesting. Okay, let me get to the point I want. So here, so here we actually zoomed out one level and as you can see the squares got smaller and the colors disappeared. Since we had fixed-size squares, as soon as we zoomed out one level, we needed to add more squares to the map and the browser just couldn't cope with this, because the number of squares increased exponentially and the browser would just stop working. So we actually had to limit the zoom level and actually disable the choropleth at lower zoom levels. This was a big problem for the user because you would lose context. So at this level you wouldn't know where images were nor where to find them. You had to either zoom in and then pan. You couldn't have a global view of the system, which was also a problem. Okay, the solution for this was to use a zoom-independent grid, which is what's implemented right now and what we've seen in the browser. As you can see, the number of squares on screen is almost always the same, and as you zoom in we have an increase in a ratio of 1 to 4. So each little square becomes four different squares once you zoom in. So you can see that one that says 26: when you zoom in, it will become four different squares. So this constant ratio makes it so that the users don't lose context of what's happening and the area that they take up is always the same. So the square doesn't move and the imagery is not lost. This is way faster for the browser to render because the number of squares is lower. It's easy for the user and we can have the choropleth always visible, which gives us a very nice world view like this.
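As a back-of-the-envelope illustration of that zoom-independent behaviour — the cell size is tied to the zoom level so each zoom step splits a cell into exactly four — here is a small Python sketch of the arithmetic. It is only an illustration of the idea; the browser's real implementation is JavaScript/React and works in projected pixel space.

```python
import math

def cell_size_degrees(zoom, cells_across_world=16):
    """Width of one grid cell in degrees of longitude at a given zoom level.

    At zoom 0 the world is split into `cells_across_world` columns; every
    zoom-in halves the cell width, so one cell always splits into exactly four.
    """
    return 360.0 / (cells_across_world * 2 ** zoom)

def cell_index(lon, lat, zoom):
    """Column/row of the grid cell containing a point (simple equirectangular split)."""
    size = cell_size_degrees(zoom)
    col = int(math.floor((lon + 180.0) / size))
    row = int(math.floor((90.0 - lat) / size))
    return col, row

# One cell at zoom 3 corresponds to a 2x2 block of cells at zoom 4: the 1-to-4 ratio.
print(cell_size_degrees(3), cell_size_degrees(4))        # 2.8125 1.40625
print(cell_index(123.0, 11.0, 3), cell_index(123.0, 11.0, 4))
```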
You can immediately see where imagery is and what's available. Okay, some tech stack now. To build all of this, we used Mapbox for the map — which will be giving fantastic talks as well, I heard. We have a lot of experience working with Mapbox. It's a very powerful system, it supports powerful styling and it's very easily extensible. And then we decided to pair it with React. So when we started building this, like one year ago or so, React showed good promise and nice growth and evolution. It's open source, it has a good community and it's very simple to use. At Development Seed, we also often strive for cutting edge technologies and try new approaches to things. In the past, we had actually tried to pair Mapbox and Angular, but it turned out not to be as easy as with React. React's unidirectional data flow actually makes for a very nice integration between the two of them, especially because of how it responds to events and how it has the top-to-bottom rendering process, which was also interesting. Besides all this, we use Travis to keep everything in check, to keep all the tests running and to keep everything deployed and in sync. So I've mentioned in the beginning some of the tools that make up OAM, including the catalog and the browser, which we've demoed actually. But we have some more, some other ones. So the first one would be the uploader. This is the tool you can use to upload images to the HOT OIN node. It's also a form that we developed. It's not fully open to everyone. You need a token to be able to use it, but it's super easy to get. You just have to state your intentions and what kind of imagery you have, and then you're assigned one and you're done. This is just so that we can try to reduce the number of spam images we get. We have a documentation hub that basically gathers everything, all the information about the project, why the different tools were built, what they can do and actually also how to run your local copy. Information is still being added to it; it's still in development. So if you browse it, you may find that coming soon message somewhere. And last but not least, we have the design system. So the design system here has enough material for a talk of its own, so I'm going to go over it very briefly. It's basically a set of styles, guidelines and other kinds of shareable code components that we use throughout OAM to ensure visual and behavioral consistency. If someone wants to start a new app or a new tool for OAM, they can just install this design system and they will have a bunch of pre-made tools and pre-made styles that they can just use, so that everything looks more or less part of the same branding. We have a nice blog post about design systems. You can find it on tiny.cc slash design systems, on why they're important and why we think they are a good approach to some types of projects. So what does the future look like for OAM? The first thing we want to do is to simplify the contribution process, how people can actually help us. Most of this will be done by simplifying the process of uploading imagery and contributing with imagery to the project. And we are also looking to get more organizations to have OAM nodes. So if any of you guys has images that they can share or they want to share, or you can find them somewhere, please consider getting a node, or if you can't do that, just contribute through the uploader. That also always helps. Another way to get involved is with code. If you're a programmer, you can just contribute with code, help us out.
Otherwise, just go over the tools we've built, give us opinions, give us ideas. Everything is open source, everything is available under the HOT OSM (hotosm) organization on GitHub. So just check it out and let us know what you think. Okay. Thank you very much. We have some time for additional questions. If anyone wants to know anything about the... Just wondering, you probably have a lot of data in there which might be of interest for other people to ingest automatically. Do you provide standardized interfaces to your data, like CSW, or is it your own standard which is not ingestible by other catalogs automatically? At this moment, the only point of access we have is through the catalog, which is that JSON API where you can get information and metadata about all the images that we have. Is that something that you would consider, providing a CSW interface? Because there are probably some people who don't want to go to a different interface which is not theirs, but would prefer having it in their own catalog. That's interesting. It may be something we eventually consider. So CSW is an OGC, Open Geospatial Consortium, standard which defines an interface to query metadata from catalogs. For example, GeoNetwork is one implementation of it: you have all the metadata in there, and then you can have several nodes which can themselves harvest from other catalogs and then be harvested by others. That way you don't need to have this interface here — probably just an adapter for that interface — to have it harvested. Yeah, definitely, definitely maybe something we will implement. Okay, thank you. Yeah, I have one question. The first slide you showed was something about the Open Imagery Network nodes — so a node can be hosted by a person or by a community? Yeah, sure, definitely. It would be amazing actually. So I mean, yeah, if possible — if this person or community has the data and the right license around it — yeah, yeah, actually, yes. So as I said, you have the two ways to do it. You either host a node if you can, and then you could be the owner, let's say, of the Belgium node, or if you can't do that, you can just get the imagery and then upload it through the uploader. But you have to make sure that you can do that and that the imagery can become available under CC BY 4.0. But besides that, yes, it would be nice actually. You can have your own nodes because, for example — yeah, that's already there. No, no, no, no. If you have the raw imagery, you can use the uploader, which is this one. Yeah, so this little form, it goes on, but basically down here you'd have fields for all the images and then you can just put the links of the images and upload them. And this uploads to the HOT OIN node, which is our node if you can't have your own. Yes, down there. Overlapping images in the browser? So the browser only allows the preview of one image at a time. You get the side panel with all the images available for a specific cell and then you can just select whatever you want. There is some filtering allowed, like by a time frame — last year, last month, last week. But then once you have all the results, you just have to go one by one, see the metadata and get whatever you want. If you're looking for a more powerful interaction, then the API would be the way to go. Yes, there. I'm not entirely sure. I think GeoTIFF is the best format you can upload the images in.
Elevation data — we are not using it at the moment in the browser. We're not showing it, but it would be stored as a property in the database, maybe for future querying processes. Yes, yeah. Right now we are storing everything on Amazon S3 buckets. Sorry, again? It's very easy to have an OIN node. You need to have a place which is accessible by everyone, like an S3 bucket or your own server, if you will. And then just follow the instructions on the readme, which is basically: add the URLs of your node to the register, the OIN register, and that's it. Then we have a worker that runs, that indexes everything, and as long as your bucket is accessible, it will find it, no problem. We have a question here. Right now it's English only. We don't have multilingual support for now, but it's something that we are considering for the future. Yeah, sure. All the code is open source. You can find us under hotosm on GitHub. If you have the skills to do so, please. Anything else? No? Okay. Thank you very much again. You can find me under the... Or if you have more questions or want to talk in private, I'll be around. So just come and search for me. Thank you. Thank you.
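For the power users pointed to the API earlier in the talk, a catalog query could look roughly like the sketch below. The endpoint and parameter names (api.openaerialmap.org/meta, bbox, acquisition_from and so on) are my assumptions from memory of the OAM catalog API, so check the current API documentation before relying on them.

```python
import requests

# Hypothetical query: recent imagery over part of the Philippines.
params = {
    "bbox": "122.5,10.5,123.5,11.5",     # minLon,minLat,maxLon,maxLat (assumed format)
    "acquisition_from": "2015-01-01",    # assumed parameter name
    "limit": 5,
}
resp = requests.get("https://api.openaerialmap.org/meta", params=params)
resp.raise_for_status()

for item in resp.json().get("results", []):
    # Field names (title, gsd, uuid) are assumptions as well.
    print(item.get("title"), item.get("gsd"), item.get("uuid"))
```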
Last summer the new OpenAerialMap launched. OpenAerialMap (OAM) is now providing access to open satellite, aerial, and UAV imagery around the world. Users can search through a web-based map browser, conduct geographic queries to an API, upload imagery to publish openly licensed imagery, and process imagery into tile map services. Searching through many sources of imagery in a usable way was one of the biggest challenges we saw when designing the system. We knew usability was going to be critical to the adoption and success of OAM so we created a new type of grid interaction to search and find imagery. This talk will present the design and technical build process for developing the new OAM map browser and the open source tools that power the system. We'll discuss our UX experiments and how they influenced the build process, and talk about how and why we used React JS to build a grid-based imagery browser. The OAM community of open source tools is growing over the next year. We'll also provide a recap of the roadmap for the next year and how anyone can get involved.
10.5446/20417 (DOI)
Hello, welcome. Thanks for coming. My name is Matthew Hansen. I'm with Development Seed in Washington, D.C. And I'm going to talk about a project that we did for the American Red Cross. You're probably all familiar with the Missing Maps project. Is anyone not familiar with the Missing Maps project? Okay. So Missing Maps is a project sponsored by the Red Cross to encourage mapping of areas in need, especially after disasters. So the MissingMaps.org website — we worked on the project to redesign the website. So if you go to that site now, that's a new site from last year. It's actually been up for maybe six months now or so. And our original goal was to not only redesign the website, but Red Cross wanted user pages showing people's statistics and what they've committed into OpenStreetMap, as well as statistics on those commits, and some sort of reward mechanism, and, related to that, a leaderboard showing the ranks of users and groups, and they wanted this to happen in real time. So the Missing Maps project sponsors Mapathons. Mapathons are where everybody gathers together for an hour, maybe two hours. Maybe some people haven't used OpenStreetMap before. They perhaps undergo a short training session, and then they have a targeted area where everybody jumps on. So this is like maybe 70 users, maybe a lot fewer, maybe some more for really large ones, and they map that region. So the real time component is so that you can show, up on a projector, the real time contributions over time during these Mapathons. So this comes down to tracking commits. So that's what we need to do: we need to take the commits and track them. And a commit in OpenStreetMap is called a change set. And this is made up of metadata and the data. So if you're familiar with OpenStreetMap, you might go here and look at some of the details on a particular change set. And this has metadata and the data included in it. This is the geometry. And this is the metadata that's published every minute. Now I'm going to get into the details of the real time system in a little bit, because in order to do this in real time — the geometry isn't actually available with the metadata, and so it's a little bit more complicated. But if you notice in the change set, we have hashtags. So hashtags are how we form communities in Missing Maps, or in fact other projects. When these Mapathons happen, or maybe outside of Mapathons, people who make commits can add hashtags to their commits. And then we can track those. So hashtags are spatially unbounded. They track groups and events. The biggest one is the Missing Maps hashtag that the Red Cross was particularly interested in. But you could put as many hashtags as you want. So for a particular Mapathon or for a particular project, you might have a hashtag, and the editor that you use can be configured to just automatically add those hashtags every time that you make a commit. So we have maptime, my awesome hashtag, whatever it is that you want to add. So this brings us to leaderboards. So with these hashtags in place, we can have leaderboards where we can look at the total commits for any specific hashtag, as well as the users. And what you see here is interactive, so you can add any specific hashtag that you want. If you go to this page, it will default to — the one on the left is the Missing Maps hashtag — but I've added a HOT OSM one and a Maptime event one.
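Pulling those hashtags out of the change set metadata is simple string work. The sketch below is only an illustration in Python (the actual project code is Node), using the standard OSM changeset XML layout with a made-up change set.

```python
import re
import xml.etree.ElementTree as ET

# A toy change set document in the standard OSM metadata layout; in the real
# pipeline this comes from the minutely replication files on planet.osm.org.
changeset_xml = """
<osm>
  <changeset id="123456" user="mapper42" uid="99">
    <tag k="comment" v="Tracing buildings #missingmaps #hotosm #maptime"/>
    <tag k="created_by" v="iD 1.9"/>
  </changeset>
</osm>
"""

HASHTAG = re.compile(r"#[\w-]+")

def hashtags(doc):
    """Map change set id -> list of hashtags found in its comment tag."""
    result = {}
    for cs in ET.fromstring(doc).iter("changeset"):
        comment = ""
        for tag in cs.iter("tag"):
            if tag.get("k") == "comment":
                comment = tag.get("v", "")
        result[cs.get("id")] = HASHTAG.findall(comment)
    return result

print(hashtags(changeset_xml))   # {'123456': ['#missingmaps', '#hotosm', '#maptime']}
```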
And so you can see that in a Mapathon, let's say, you could have two groups, and one group uses one hashtag, another one uses another, and they both are using maybe some common one. And you could have some sort of, you know, competition in the map and see who's doing more commits — you know, left side, right side, bald people, those with hair. This is the total number of edits that have been made in maybe the last six or so months, since we started tracking. We would like in the future to go back and add historical data so we can go back all the way to the beginning of the Missing Maps project. But right now, this is just since we started. And then the leaderboards show the users. Right now, this is being sorted by the total number of edits, but you can sort by buildings or the kilometers of roads or any of the fields here. So you see we have RIVW, who has been on the top. If you check every once in a while, it's usually these top five people, they sort of bounce around. I don't know if they're actively trying to get on top, but right now RIVW here is the top one. Now if you have made a commit to OpenStreetMap in the last several months, and you have added the Missing Maps hashtag, then you already have a user page automatically. This doesn't have to be set up. It's automatically added. So those user pages — each particular user can go to a specific page. And so here we have RIVW. It shows a variety of metrics on his contributions: total edits and that sort of stuff, the hashtags that are used, as well as, you see, these badges. Also we have a contribution timeline and a map showing the regions. Now he's clearly focused right here, around South Africa. But other people actually bounce all over the place. We actually save the convex hull of a commit and then combine that with the previous convex hulls, so we're not storing all of the geometry, just the approximate region. And also the countries: we'll map those geometries to what countries they're in, so we can track the countries that are mapped as well. And now you can look at the badges that RIVW has earned. And there's a variety of them; he has, of course, been very active. And this is all original artwork made for this project, with clever names. The illustrations are all done by our Dylan Moriarty at Development Seed. And down below on that same page, you have your upcoming badges. And you can see that there's progress there. So we've got white water rafting — that's mapping of waterways. There's really quite a number of badges that you can earn. And here are some examples, a little pixelated. But okay, so why rewards? Some people will be like, well, that's silly. Originally this project was called OSM gamification. But internally, we didn't really like that term. Sometimes there's a negative connotation to gamification, or maybe it's a buzzword that's been perhaps used too frequently of late. Well, rewards provide a few different things. First off, there's an immersion in the mapping experience: you make commits, and it's not just about making the commits and them going into a black hole amidst all the other commits on OpenStreetMap — you can go to your stats page and you can see exactly what you've done. So it gives the statistics for what you've done, which I think are very useful.
And most people would be interested in that. People are after different things. Some people might not care about any of these things. Some people might maybe care about a few of these. So you get a sense of achievement when you earn badges, and you strive to maybe get the next badge. So this increases retention. It also can encourage cooperation. Like I mentioned before, you could have teams, and teams can cooperate in order to perhaps win over the other team during a mapathon. And of course, there's the competition inherent in that. So how did we do this? We implemented this real time system using largely microservices, and there's a diagram here and I'll just talk about each of the pieces. This is all implemented on AWS. So the first thing that we need to do is we need to stream the real time data. So OpenStreetMap makes the metadata available from planet.osm.org and these diff files are published every minute, usually. Every minute there's a new file added and it's all the commits that happened in the last minute. But it doesn't include any of the geometries. So these are available. We could replicate the OpenStreetMap data ourselves, but it changes constantly. So we use the Overpass API. Overpass essentially replicates the OpenStreetMap database and makes the geometries available for the last minute for all the commits. Well, now these have to be matched up. So we have a node app called PlanetStream, and PlanetStream takes in the change set metadata from OSM and the augmented — what is called the augmented diffs — from the Overpass API, and has a Redis instance running and puts them in the Redis instance, because sometimes these don't match up. You can't just take the change sets for one particular minute and the geometry from the same minute, because there might be a delay for a variety of reasons. So we put these on a Redis instance and have a timeout, I think, of maybe an hour or more, maybe it's a couple hours. And we match up the metadata IDs with each other. And so we create a final, combined change set with the geometry. Simultaneously, PlanetStream makes the geometries available for the last 100 edits, just so that if you go to the Missing Maps website, you can see a map showing the last 100 commits made and where they are. And also it keeps track of the trending hashtags. So again, at Missing Maps, if you go in and want to add a hashtag, you can see a list of what's been popular recently. So now we need to calculate the user metrics, now that we have these combined change sets. The repo that we use is called osm-stats-workers, and we use AWS Lambda functions and Kinesis Streams. And I believe that, yes, here we go. So here's the diagram: you see the combined change set goes into an Amazon Kinesis Stream, which is just a queue — you add it to the queue. And then as change sets are added to the queue, that fires off a Lambda function. A Lambda function, if you're not familiar with it, is serverless. So this is all a serverless setup. So we have a node app and that is uploaded as a Lambda function, and we don't have to worry about running servers or anything like that. And they're invoked every time a change set is added to the stream. And it scales automatically. So if there's a lot in the stream, then it'll fire off a lot of Lambda functions. And it works very well. We use an RDS database to store these metrics.
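To make the Kinesis-to-Lambda step a bit more concrete, here is a minimal handler sketch in Python (the real workers are written in Node). It only shows the plumbing — base64-decoding the Kinesis records and pulling a few fields off the combined change set — and the field names and the database write are placeholders, not the project's actual schema.

```python
import base64
import json

def handler(event, context):
    """AWS Lambda entry point, invoked for each batch of Kinesis records."""
    for record in event.get("Records", []):
        # Kinesis delivers the payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        changeset = json.loads(payload)

        # Hypothetical fields on the combined change set produced upstream.
        metrics = {
            "user": changeset.get("user"),
            "hashtags": changeset.get("hashtags", []),
            "edits": len(changeset.get("elements", [])),
        }
        save_metrics(metrics)

def save_metrics(metrics):
    # Stub: in the real system this would be an insert/update against RDS.
    print("would write to database:", metrics)
```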
So the Lambda function calculates some metrics on the change set and adds that to the database. Oh, yes. And these are the types of things that are calculated, right? We have the metrics here, but also some geometry calculations to figure out what country things are in, getting the convex hull of those geometries and adding it to the user's total contributions geometries. Okay, so why Lambda? The mapathons, right, are not happening all the time. There are a lot of times there's no activity at all for a particular hashtag or any commits to OpenStreetMap. And then during a mapathon, that can spike and get really high. So we didn't want to run an EC2 instance all the time. So Lambda functions are perfect because Lambda functions, you don't pay for them when they're not doing anything, if they're just sitting there. So you can upload a Lambda function and it doesn't cost you anything at all. And that's very nice. It only costs for how long it runs and how many times it runs. So here you see the invocations and the number of times that the Lambda function is called over some time period here. And it can vary from zero or maybe a couple per minute up to 100 per minute. And the Lambda functions, therefore, provide a very cheap way to do this. I will add that our cost for the Lambda functions, for this whole project, is essentially zero. We're not running a million requests. If you have something that could be a serverless setup, I would encourage you to really look at Lambda functions because they're actually very, very cheap. They cost fractions of a cent per time they run, depending on the memory usage that you configure. They're a very cost-effective way to do things if it makes sense. So, contribute. If you go to the missing maps page, there's a list of mapathons that are coming up. So this is the current mapathons coming up. So if you happen to be in the Czech Republic or Belgium later in the next few weeks, and there might be others in your area. Here's the overall contributions for missing maps since we started doing this. And, of course, I should point out that this is all open source, and you can go to the American Red Cross GitHub page here and the OSM stats repo. Now, these are multiple services, so there's multiple repos, but if you go to OSM stats, the readme file lists all the repositories that we use for this. And that's it. Thank you. Okay, thank you very much, Matthew. So we have time for a few questions. I'm curious what Red Cross's reaction has been to the project. Are they getting the kind of outcomes they were looking for? Yeah, good question. Well, overall, yes, there's been a variety of technical difficulties. I think that early on in the process, we didn't quite realize some of the technical issues that we would have in trying to do this in real time. Specifically, issues with overpass, sometimes going down and that delay and dropped commits. So we have had, since we started, we have had periods of time where we've lost commits. And so that's why one of the reasons why we want to do historical processing is not only to go back to the beginning of time of missing maps, but also to fill in these gaps. So, but other than these technical glitches, I think it was perhaps maybe ambitious to think that we could reliably have a 100% up all the time service that would always run. So this gap filling, periodic gap filling going back to achieve 100% inclusion is what we feel is important now. Is it working for things like retention and building user pool? Yeah, it seems to. 
And if you go to the leaderboard pages, you see that some people are very active and they have a lot of badges. And it's really pretty cool. And especially during mapathons, it's neat to have it up on a screen and watch the real-time commits coming up. Like, you make a commit and then you can go to your page and it comes in — usually within a minute or two, sometimes longer — you can see that coming in. Another question? Yeah, thank you for your talk. A bit of a superficial one: the architectural diagrams in your slides were super pretty. I was just wondering how you generated those? I'll have to get back to you on that. I can't remember if it was the... I don't know. Shoot a tweet to me and ask. I can't quite remember how that was done. Marc Farra is the one who did that. And we've used a couple of things to make those. Amazon has their own architecture diagram tool, but we didn't use that. Send me a tweet; I will respond and let you know. Okay, then thanks again. Thank you.
Mapathons are an increasingly effective way to get data into OpenStreetMap. The Missing Maps project hosts mapathons to increase the amount of data in areas that don't have large local OSM communities. The American Red Cross and Development Seed have built an analytics platform that tracks user trends in real time and rewards contributors for their efforts, as can be seen at missingmaps.org. OSM-stats tracks users' activity, consistency and relative reputation, reporting detailed metrics and awarding a variety of themed badges based on the type and magnitude of contributions. Badges range from simple tasks ("Add 4 roads") to challenging ones ("Map in 10 countries"). Leaderboard pages display up-to-date detail on the most active users for a current project, while hashtag groupings allow statistics to be separated out, enabling the tracking of groups. A map of each user's commits can be seen, as can a map view indicating the last 100 changes. Most of the contributions for the Missing Maps project occur during mapathons, where hundreds of volunteers submit edits and additions over a couple of hours. This means that the system needs to handle large spikes of activity when thousands of edits are added. We deployed the OSM-stats components using AWS Lambda functions and Kinesis streams. These scale very well to meet the needs of mapathons and incur minimal cost when not in use.
10.5446/20416 (DOI)
Good afternoon, ladies and gentlemen. My name is Dave Curry with GeoAnalytic. I'll be your host and timekeeper today. For the first talk today, I'm going to introduce Marco Deuker from SkyGeo, who's going to talk about high resolution deformation maps with high performance and extensive processing. That's right. It's a mistake already. Okay, well, good afternoon. Thank you for being here at the end of this long day. Today I'd like to share with you my experience with serving up the high resolution deformation maps we're so proud of at SkyGeo. My name is Marco Deuker. At SkyGeo I'm responsible for actually delivering the maps in the user portal to the end users. SkyGeo is a company that makes a living out of monitoring infrastructure with satellites — radar satellites, actually. It's organized as a sort of startup, so we like to move fast. I'll start my presentation by explaining the nice data products we make, then how it looks when we deliver them, and then a sort of field trip or trip report: just sharing the experiences with you, and the do's and don'ts. First, the principle of the InSAR technique, as it is called. InSAR is a really simple concept actually. The satellite comes over every four or eleven days or something. Then it does its acquisitions. The interesting part is that it can follow a point in time by its spectral signature. So there's somewhere a reflector there on that house, and by the spectral signature of the reflector we can follow it in time. We don't know exactly where the point is — that's interesting — but we can follow it in time. Okay, what we use to see if a point moves towards or away from the satellite is actually not the number of wavelengths from the satellite to the point; we look at the phase difference. That makes it possible to measure with a very high accuracy, but it's also very difficult, because we cannot clearly see if something has moved a bit towards the satellite, or a bit plus one wavelength, or a bit plus two wavelengths, or a bit plus three wavelengths. We don't really know that without introducing additional information. That additional information can come from nearby points, preferably known to be stable, or from careful analysis of the time series, but we do need to introduce some extra data. The resulting data properties are, I think, very interesting. In the end we measure movements with millimeter precision. So it's very precise. We have an interval of four to 35 days, depending on the satellite we choose. The interesting part is that we have data available from 1992 onwards. So today you can decide to start monitoring something from 1992 onwards. That's always very interesting. We always start too late with monitoring programs. So finally there's a technique where today you can decide to start monitoring yesterday. And we do need hard, reflecting surfaces to get some signal. And we measure in the direction of the satellite, but we can do some smart decomposing so that we can actually measure horizontal movements (only in east-west direction) and vertical movements. Well, a typical map we deliver to our customers looks like this. This is a part of the Port of Rotterdam. It's about three by four kilometers, this green field. And you see a lot of green and red points. Green points are more or less stable. Red points are subsiding. And if you then click one of these points, you see the time series with measurements. And that time series shows that it clearly goes down. We have a lot of that in the Netherlands, actually.
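The phase ambiguity described a moment ago is usually written as follows (not from the slides, just the standard InSAR formulation, up to sign convention): for a two-way radar path, a line-of-sight displacement \(d\) changes the measured interferometric phase by

\[
\Delta\varphi = \frac{4\pi}{\lambda}\, d ,
\]

but since the phase is only known modulo \(2\pi\), the displacement is ambiguous by integer multiples of half a wavelength:

\[
d = \frac{\lambda}{4\pi}\,\Delta\varphi_{\text{measured}} + n\,\frac{\lambda}{2}, \qquad n \in \mathbb{Z} .
\]

Resolving \(n\) (phase unwrapping) is exactly where the nearby stable points or the time-series analysis mentioned above come in.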
Well, where do we apply that kind of data, and what do our customers do with it? One of the more interesting applications is actually injection monitoring. If you are exploiting an oil field, getting oil out, the soil will subside — it can even go quite fast. And that might give you trouble with the neighbors, for example, or something else. And then you can inject water so that the soil more or less keeps at the same level. If you inject too much water, you actually push all sorts of rubbish to the surface. You don't want that. If you inject too slowly, you get a lot of shear stress in the subsurface and you can get something we call well casing damage. That well casing is very expensive: a well casing going down costs about a million to exchange. Since our customers started using our data, they haven't had a single well casing damage anymore. So that really is a very good business case. Very interesting. What's also interesting is that it's used for gas pipeline monitoring. In the Netherlands, houses are typically built on foundations, but the gas pipes going into the houses are not. Well, the soil is subsiding, and a gas pipe can take only so much strain before it breaks off. And then you have a gas leak. That's a bit of a problem. It's always difficult to find gas leaks before something explodes. And it's always nice to actually go digging in the right place. Well, we deliver this kind of map so that the guys who have to go digging and find the gas leaks actually know where to start digging and have the highest chance of finding them. Really saves a lot of money. And again, an example from the Netherlands: maintenance planning for the municipality. Roads tend to sink in the Netherlands. And then every now and again, you have to top them up, because we do want to have the roads above the water level — at least 30 centimeters in most places. And well, with these kinds of maps, they can actually predict when the road will subside to below 30 centimeters above the groundwater level, and then they have to do something. So they do better planning on the basis of this kind of data. That's interesting. Excuse me. Not working. We're back. Okay, sorry. Next slide — it will still work. There's some error here. Nothing left. Okay. Okay, we're back. Okay, delivering this data to the customer. What do we do? We deliver mapping services, WMS. We deliver data services, WFS, and we deliver processing services, WPS. We also deliver data. They always ask for data. Shapefiles don't work because there are way too many points. So we deliver CSV files, and the typical customer then discovers that you cannot render that kind of file in a typical GIS. So we also provide our customers with viewers, with data and meaningful tools. One of these meaningful tools is sliders, to dial in properties so that you actually only see the most interesting points on your map. You see that happening here in this little video. The performance is actually quite low, just because I was doing this on very slow Wi-Fi at the campsite. Okay, that is done using a hidden gem in the WMS specification, the WMS dimensions — well known from elevation or time dimensions, but it's a generic principle, so you can use it for just about any kind of filtering. And it's also advertised in the capabilities document. So you can have a filter; for example, if you add this to your query when you do a GetMap request, then you will only see points within that range. Very useful for our clients to filter the data.
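Concretely, a filtered GetMap request along these lines is what the sliders in the viewer generate behind the scenes. The sketch below is illustrative only: the endpoint, layer name, bounding box and the dimension name DIM_VELOCITY are made up; only the DIM_<name> request pattern of WMS custom dimensions is assumed.

```python
import requests

# Hypothetical endpoint, layer and dimension name -- only the DIM_* parameter
# pattern itself comes from the WMS dimensions mechanism described above.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "deformation_points",
    "CRS": "EPSG:28992",
    "BBOX": "61000,440000,65000,443000",
    "WIDTH": 800,
    "HEIGHT": 600,
    "FORMAT": "image/png",
    # Only render points whose velocity falls inside this range (mm/yr):
    "DIM_VELOCITY": "-10/-2",
}
resp = requests.get("https://example.com/wms", params=params)
with open("filtered_points.png", "wb") as f:
    f.write(resp.content)
```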
We also supply for some customers a time slider, so that they can actually zoom in on a certain period in time where something interesting has happened. Well, you see it happening here in the background. That was actually very easy to do with MapServer, as we use it, with runtime substitution, for the ones who like that. The support in the WMS standard is not that great; we have to do that with vendor-specific parameters, as we call them. What is also important for our customers is that they can apply certain styles so they get maximum insight into the structure of the data. Well, they have to select point sizes and they have to stretch the colors over the values and so on. And in WMS, that tends to get quite cumbersome, because you need to supply a lot of styles for the customer so that they can actually see the data well. It's not very satisfactory, so we found a different solution. And we actually now have two sliders on our viewers: just one setting for the point size and a slider to stretch the colors over the values, just as we would like it. But it was not very easy to implement. Actually it was quite a struggle, and I hope at the end of the presentation I can tell a little bit more about how we did it. But the principle is: just slide, stretch the colors over the values, and then suddenly you see very clearly — well, in this case you see less clearly — what's happening. Okay. Also important, of course, is querying the dataset for the customer. Here you see a query happening, a spatial query just selecting a few points, and that gives you a point set that you can walk through and see all the graphs and things. Of course you can also query on attributes, so that you can select all points which move faster than a few millimeters per year or something. That's just simple, plain WFS, not that interesting. But we did find that if you use WFS as a download service, which it's actually not really meant for, it tends to be quite slow in the end. Even MapServer 7, which is much better at doing WFS, is still too slow, so it's handy to have something ready for a user who wants to download the entire dataset. Really making noise, this — maybe I should hold it, I think. Okay. What we also provide for customers is the ability to actually process the data. What you see happening here is that the user is going to combine all those colored dots, the deformation measurements, with the buildings, so that you can actually see which buildings are going down and which buildings are more stable. That works really well, right from the interface; they don't need any GIS and it doesn't take long. I didn't speed up this movie or anything, it's just normal speed. Here you see that these two buildings are definitely not stable, most of the other ones are stable, and partly you have one which is orange, which is also not so very stable. Okay, for processing the data we use the WPS standard, and that standard defines a way to send requests for processing and also a way to send the response back to the client. We use PyWPS to implement it, and PyWPS allows you to write just about anything in Python for the processing. You're not bound to what you serve with WMS, WFS or whatever; you can do just about anything you like to program in Python. The queuing and killing of tasks you have to write yourself. That's not so nice. But it's very easy to extend and hack, so for people who like that, go ahead.
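As an illustration of what such a processing service can look like in code, here is a minimal sketch of a PyWPS process, using the PyWPS 4 API. The identifier, inputs and the toy logic are invented for the example and are not taken from SkyGeo's actual services.

```python
from pywps import Process, LiteralInput, LiteralOutput

class FastPointCount(Process):
    """Toy WPS process: count points subsiding faster than a threshold."""

    def __init__(self):
        inputs = [
            LiteralInput("threshold", "Velocity threshold (mm/yr)",
                         data_type="float"),
        ]
        outputs = [
            LiteralOutput("count", "Points more negative than the threshold",
                          data_type="integer"),
        ]
        super().__init__(
            self._handler,
            identifier="fast_point_count",  # hypothetical process name
            title="Count points moving faster than a threshold",
            inputs=inputs,
            outputs=outputs,
        )

    def _handler(self, request, response):
        threshold = request.inputs["threshold"][0].data
        # A real process would query PostGIS and join deformation points with
        # building footprints; here we just use a hard-coded sample.
        velocities = [-1.2, -7.5, -0.3, -12.8]
        response.outputs["count"].data = sum(v < threshold for v in velocities)
        return response
```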
Okay, well, those were the more interesting parts we had to put in the viewer, and that left us, when we wanted to build it, with the requirements for the portal. Of course the normal mapping functionality, and the more unusual things I showed just now, had to be in there. But we want to have it fast, because we need to render everything live. We can't do the usual tiling and caching thing most people do when you need a fast map, because we have to do that live filtering. It needs to be reliable. It has to be there every day. Customers are all over the world, so we cannot take things down. We tend to tinker a lot. Customers have a lot of requests, and most of the time we want to react to them within a week. New features we bring into production within a week. That means tinkering in a production environment, and you don't want to break things for other customers while doing that. So that's an interesting requirement. Of course it has to be secure. You don't want, for example, the oil drilling data — where drilling is going on — to be public. That's very sensitive information. It needs to be scalable, of course, because we want a lot of customers, and then there is the flexibility I talked about. And the last thing, of course: it needs to be standards compliant, so that every customer can actually use it in their own GIS. This is the architecture we used. We wanted to keep it KISS: keep it simple, stupid. We wanted a simple architecture which wouldn't let us down. So we went back to as much proven technology as possible. Well, that means that, actually, looking at the diagram now — I drew this on the first day I started there, designing this software and this entire portal, and it's still there. I tidied it up a bit for this presentation, but that's about what we made, and it's still there. Well, in the lower part here you see the processing, the raw satellite data. That's not what I'm talking about. Here it starts at PostGIS; all the data is in there. And then we provide services with a web server — a WSGI web server — and PyWPS. And on top of that we use NGINX as the front-end web server; all authentication is happening there. And NGINX also serves the static content and does some server-side scripting. An interesting part is that we made almost everything file-based, except for what is in PostGIS. So here you see what we store for two customers, the customer "demo" and the customer "intern". And the customer demo in this case has a viewer called Rotterdam, and the viewer has a download section, a services section and a viewer section. It's all file-based: if we want to take a customer offline, we just delete the folder and they're gone. Well, if you look at services, there you have the well-known map files of MapServer and you have the files defining a JavaScript viewer, and we can tinker with that for every customer separately if we want to. Everything is symlinked and so on, so if we want to do something for all customers at once, we just touch the code base and it's done.
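The services layer here is MapServer wrapped as a WSGI application via Python MapScript (the speaker comes back to this in his conclusions). A minimal sketch of that pattern is shown below — the per-customer mapfile path is hypothetical, and the real SkyGeo code certainly does more (authentication, capabilities rewriting, and so on).

```python
import mapscript

# Minimal WSGI wrapper around MapServer's OWS dispatcher. The mapfile path is
# made up to mimic the per-customer folder layout described above.
MAPFILE = "/data/customers/demo/services/rotterdam.map"

def application(environ, start_response):
    request = mapscript.OWSRequest()
    request.loadParamsFromURL(environ.get("QUERY_STRING", ""))

    mapobj = mapscript.mapObj(MAPFILE)

    # Capture MapServer's stdout (the OWS response) into a buffer.
    mapscript.msIO_installStdoutToBuffer()
    mapobj.OWSDispatch(request)
    content_type = mapscript.msIO_stripStdoutBufferContentType()
    body = mapscript.msIO_getStdoutBufferBytes()
    mapscript.msIO_resetHandlers()

    start_response("200 OK", [("Content-Type", content_type)])
    return [body]
```

Having a single place where requests come in and responses go out is what makes it easy to rewrite, for example, the capabilities document on the fly, as mentioned later in the talk.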
What we found during this journey is actually that our OGC standards, the good old WMS, WFS, WPS come a very long way in creating a rich web application with custom tools and things and so on that you can have really specialized software for customers. We do have customers who say we want a GIS like that. Okay, the live rendering. Well, we do assist it with a little bit of smart caching by NGINX. Really comes a long way in serving up high-performance maps, giving a high-performance experience while live rendering. So everybody thinking you always need tiling and caching. If you like really work carefully and have performance in mind you can get performance in mind maps without tiling and caching. Another nice conclusion which I cannot back up with, still too much, which I cannot back up with the slides, but we use Python Maps Script to turn Mapserver into a WSGI application and it proves to be very flexible. Sorry about that. Very flexible. And then you have one place where request come in and responses go out. So that gives you the ability, for example, to edit the capabilities document on the fly when it's going out. So then you can much more customize the way your application is behaving. So when you run into limitations of Mapserver then you can fix it in your own Python scripting. It's really, really flexible and it's actually much easier to deploy Mapserver if you put it, make it the WSGI application than standard CGI application. And well, for the ones of you never working with Docker, Docker is really sort of Swiss Army knife for all deployment problems. If you're using Docker you really isolate applications into sort of container, manageable unit and that really helps us to try out new versions of software to deploy the software on a different server without a lot of hassle and it really enables us to move fast and break not too much on the way. Thank you very much. All right, have we got any questions? Hi, thanks for your presentation. Regarding the interactive viewer, have you thought about using vector data to streaming to the client and then do the scaling, the coloring on client side? Yes, yes. We actually did quite some testing on, well, will we do things on the server side or will we do it on the client side? Of course, if you do things client side it often very snappy and you have more possibilities for interacting. Well, two things are posed. One is that we lose a lot of standard compliance. So the customer then really has to use our viewer and can rely on the services that much. That's the one thing. The other thing is there's no way that you will render about a million objects in a browser. A browser doesn't care about rendering a polygon with a million vertices. That's no problem. But rendering a million separate points is really a problem that doesn't perform. So we had to discard that, unfortunately. Any other questions? I will ask the question about something different actually than you actually showed us that. What kind of tools are you using for the radar in ferometry? Good you ask. I forgot to tell actually. We wrote a lot of proprietary software to do that. And for the Sentinel-1 satellite, the one out of Copernicus, we partly used the Sentinel-1 toolbox and the rest is all written by ourselves. And it's unfortunately not open source. I have no say in that. A very similar question. What kind of components were you using for the interactive viewer? Well, we're using a product called Heron. I think Yusuf and the Broeker know something about that. 
And that's actually ExtJS, GeoExt, OpenLayers — a JavaScript viewer. It's a nice framework. You get a lot for free. But we do have to write a lot of adaptations as well. But it really works. Anyone else? I actually have a question. You're storing your results as a CSV file that you're delivering directly from MapServer. Did you compare the performance of something like that with, say, using PostGIS at the back end? No, the CSV file is just for download for the customer. So it's not even going through MapServer. The CSV files are actually the CSV files we import into the database. That's the exact same CSV file as we provide for download for the customer, just as a convenience. So the result of your processing is a CSV file, but then you load it into the database? Yeah, well, actually the result of processing is something else. But we then export the CSV file, and that one is imported into the database. The crux is, of course, you need a spatial index. And without something like PostGIS or a shapefile or something, you don't have a spatial index. And then you're really lost for speed. Any other questions? No? Well, thank you very much, Marco. That was an excellent talk. Thank you.
SkyGeo uses Interferometric Synthetic Aperture Radar (InSAR) from satellites for mapping ground and infrastructure deformation. This leads to maps with millions of virtual sensors, each measuring deformation through time series containing hundreds of measurements. Examples of monitoring infrastructure and managing water injection in oil fields are shown. The deformation maps and maps with derived information are delivered via a customer portal. The portal tries to provide the rather complex data derived from InSAR together with extensive features to investigate, analyze and further process the data in a user-friendly way. As customers are free to use any GIS package as well as the portal's own viewer, all functionality is delivered by fully leveraging the (hidden) potential of the open standards WMS, WFS and WPS. Building the portal proved challenging because of the sheer amount of data combined with the need for live rendering to allow for styling by users and dynamic filtering using WMS dimensions. On top of that, the portal must be alive 24/7, be very secure, and required new functionality must be in production within 2-4 weeks. The portal should allow a growth of 10 times per year. How these requirements can be met using Docker, Nginx, MapServer, Heron-MC, PostGIS, PyWPS and some custom components will be discussed. Special attention is given to the rich feature set while retaining standards compliance, and to the encapsulation of MapServer for on-the-fly mapfile building and easy management of a very large number of layers.
10.5446/20415 (DOI)
Good afternoon. Guten Tag. How's everyone today? Good? Good. We're going to start today off with the session for crowdsourcing. And our first speaker today is Jop. Jop joins us from the Dutch Ministry of Infrastructure and Environment. He is currently the product owner of the map feedback system, and in tandem he is working on his PhD. So I'll hand over to you, Jop. Thank you. Yes, that's correct. I'm from the Dutch Ministry of Infrastructure and Environment. And well, you can also see our three B's behind it, because those are Dutch datasets. I will explain a little about that context, because it's a bit boring — so bear with me. Well, to make public government more efficient, there are several key registrations. There are registrations for things like: are you dead? Where are you living? Stuff like that. And also some key registrations which have a geo component: the cadastre is an obvious one — where is your house, where are your land rights — but also the addresses and two topographical key registrations. Well, the ministry is one of the, well, chiefs of those registrations, because they have to be of good quality. And the fun part is that they are all open data, except for a small part of the cadastral data; the cadastral map, how big your parcels are and stuff like that — it's all open data. Another interesting thing is that there are centrally organized key registrations with one data provider, but also several with a kind of decentralized system, where many local governments, provinces and so on are all, well, getting the data right. Open data — I think it's a really, really good thing that it exists, but there are also some flaws in it. And one flaw is, well, this straight line. As you can see, this is how it's intended: the government puts data on the web. Companies will use it. They'll make products and services out of it. And citizens will use those products and services. And that works. And the money generated flows back through the taxes, and that will give the government more money to spend on map products. It works from an economic perspective, but not really well from a quality perspective. Because what you can see is that, well, several government organizations are putting all their data on the web; companies will use it, mix it, mash it up and give it to the users. So that works. And users will give feedback to the companies on those products, but they won't give feedback to the governments. And that's not their fault, but the fault of the government itself, because they don't facilitate it. They only put it on the web. So my goal is not only to improve the quality of the governmental data, but to create more interaction with it. And not only between civilians and governments, or companies and governments, but also between governments themselves. Because when you compare open datasets, you can also see that they aren't really consistent. So these are the current open data challenges in the Netherlands that I see: make it more user and developer friendly, make it more consistent, but also make it more open for interaction and feedback. And that's why my talk is about feedback. Another funny thing in the Netherlands is that, legally, there's a law which says that for every key registration there should be an open feedback system. There are even users identified by law: all governments have to use the data of a key registration, and when they doubt the quality, they have to report it.
Another funny thing is there isn't any punishment or control. So there is a law, but there is not really anyone checking that they are really doing it. And the number of feedback reports is also really disappointing. Well, you can ask why. Well, I'll show you one of the feedback systems, for the addresses. You have to fill in this, this and this. You have to fill in a lot of stuff. So it's very bureaucratic. And you also have to know the object and the data provider. So you have to know which municipality is creating the data, and that's quite hard to know as well. So it wasn't a thing that really works, especially when you have lots of feedback. So the BRT — that's one of the topographical key registrations — did a pilot. And in the pilot, they opened it up for everyone. So not only governments had to give feedback, but everyone could give feedback. There were no formalities, so no bureaucratic shit. And it was also very accessible and easy to use. And well, normally they get 10 to 20 reports every year — sometimes 10, sometimes 20 when they're lucky. And with this system, they got, in two months of piloting, well, more than three hundred. So it was a huge success. So there were lessons learned from that. It's maybe a bit silly, but it wasn't really in the mind of the government: when you want to have feedback, you also have to make it very easy for everyone, especially when you have open data. It should be easy for everyone, because you don't know who will use your data, so you also don't know where your feedback will come from. So you have to make it accessible and user-friendly. So for another key registration, we took this lesson and created a new system. And the project goal was to make it user-friendly and use those lessons learned I told you about. One question could be: why don't you use the system of the BRT, which was already a success? Well, for one thing it was proprietary, and we would use it for the decentralized key registration. So four hundred data providers should work with this. So you can't say "use ArcGIS Online" and then you can get all the feedback reports. So we made our own open source system. It was created by the Dutch Cadastre. And well, what I also think was a key component is the project approach. We used scrum development. I'm not sure what that says to you — I think it says a lot to some; some don't really understand it. But one of the key concepts of it — it's also agile — is to make the software work, do it in small packages, go for a minimum viable product. So that's what we did. We made the simplest and easiest thing to use, without any buttons and stuff like that. And we also try to improve it every sprint, as it's called, and make it better. Our first release was on the sixteenth of June, as you can see. And well, so it's now around two months live. So, important design principles. I will show it, so I will not talk too long about this. But the openness was really a key thing. When you sent in the form I showed you earlier, you don't know what will happen with it. You don't know if they will improve the map or they won't improve the map. So there's no response. You can't see what's happening. And you also can't see what municipalities are doing for others. So you can't really trust the system. So with this system, one very important thing was that you can track and trace your feedback. That's not very revolutionary, but it's revolutionary for the government. I will also go through this quickly. But how does it work?
Very simple also. You have a user who puts their feedback in the system by pointing on a map; the system will record it, put it in the database and push it through to the key registry holder's system. In that system, the key registry holders can give an update, give feedback on the report. And they send it back to the system automatically, so the users can read it. And they also get email notifications about the status of their report. You also see four letters on that side: PDOK. That's the Dutch national SDI. And we are planning to connect to that, so you can also use the reports as a web service. This is how it now looks, and I will jump into a demo. So now we are here. As you can see, there are lots of points. Well, this is what happened in two months' time. So there are over 900 reports in there. So we even broke the record, which was 300. And so that's pretty cool. And I didn't expect that at all, because I thought, well, if I get 100 of those in one month, I'm very lucky. But it's a huge success, and that's very cool to see. I will go to a point in the Netherlands where I know there is a mistake. It's at the university. And what you now notice is that the map is gray. And there is also a gray button here, because this is only for the large-scale dataset. And you can see it's visible once you're at a scale of 1:5,000. So that's the thing. We are also building it for the other topographical dataset, which will use exactly the same interface and website, and that should already be visible at this zoom level, but for the large-scale one we have to zoom in. And well, the Netherlands is of course very famous for cycling paths. But this cycling path is closed now. So I will tell the municipality that the cycling path is gone. I will do it in Dutch, of course, because otherwise they won't understand. So this is three words. That's all people have to type in; the minimum amount is five characters, but three words will mostly do the job. You can put in your email — even that isn't obligatory, but I always want to do that. So, well, maybe I won't this time. It doesn't take much time. So I just have to push the send button. And I see a little marker here. You can open the marker and, well, it says what I talked about: that the cycle path is gone. Well, that's all. Simple as that. Well, if I had put in my email address, I would automatically get an email. And what the government will get in their own system is a list of reports, and they can update them. And when they update one, the update will also be in the message. So I also get an email about that. Well, as I said earlier, it was a huge success from the early days. You can see that the numbers are scrambled, but that's really funny. Well, the total is around 1,000 in this graph, and then on a daily basis we still get five to ten feedback reports a day. And well, it is anonymous, but we can see from some email addresses that there are really individuals, governments and companies, but also OpenStreetMap users, who are really using it to improve the map. It's too soon to say something about the quality of the feedback reports, because the governments have six months to react to one. So that's a bit silly, but it's also in the law.
But what we can see from the governments who are doing the job well is that a large part can be approved — though also with a somewhat higher share of declines than in the pilot we did with the other key registration. But that is because, from what I saw, it's so easy that when you do a test and send it, you also accidentally make a report. So that's maybe something to address; well, some learning effects have to come in. What we will do in the future with this is that the application will be further developed for the other key registrations as well. And we will also improve the user-friendliness, because as you saw, I had to zoom in, I couldn't type in a search box and I also couldn't upload a picture or anything like that. That's also the current development: first make it work simply, and then we improve things step by step. Also interesting — and there's a lot of demand for it — is an API, so that developers can connect to it and use it to put reports directly on the map, because that's real user-friendliness: you don't have to go to a website but can make a report directly. And the overall long-term goal is to make it more generic and more flexible, so that in time every open data provider can put stuff in it and will get a feedback system. Another spin-off that is also quite cool is the school crowd project. It's something the Dutch Cadastre has started. Unfortunately, I don't know the details about it; the Cadastre was another speaker who would actually also show up, but unfortunately couldn't come anymore. And the fun thing is that they go to primary schools and give them an objective: locate all police stations, stuff like that, look at your own school. And the kids will put those on the map. And well, the goals are of course to make them use geo-information and make them more aware of their local environment, but the Cadastre is also really interested to see what will pop up and how they can use the data to improve their topographical products. Because one flaw of crowdsourcing is mostly that — for example, you also see it in OpenStreetMap — the cities are of very good quality, but when you go to less densely populated areas, the quality can be much lower, because fewer people will pass by and report things and make it better. But one of the things is, when we activate the school kids to improve the map, we also get better national coverage, and also a renewable source of improvements which will go on for years, which is also quite funny. And also a good thing is that — I like the legend — the kids really liked the module. It's of course better than doing homework, making maps better. Well, if you want to test the application yourself, you can go to this URL. I also put the URL for the acceptance environment beneath it. There you can put in your test messages and stuff like that, so you aren't bullying local governments in the Netherlands. And for the rest, well, thank you for your attention. And I'm curious if there are any questions or discussion. Have you received any negative feedback or criticism that it's dangerous to let the crowd provide perspective? Okay, I have to give you a microphone. Have you received any criticism that it's dangerous to let the crowd provide data, or provide perspective on the government's data? Yeah, well, thank you for your question. That's a fun thing.
I didn't start with it, but the reason we achieved all of this is actually that I did my master's thesis, and I had the hypothesis that the governments are against crowdsourcing, that they are conservative, they don't want to use it, and they think it's dangerous. But actually, the research proved that they are not — let's say, they didn't say "well, yes, we should use it directly", but it showed that it was very feasible. And I also questioned whether civilians would directly put objects on the map — they didn't want that. But a feedback system like this they really wanted to have, and they were really waiting for a government organization to develop it for them. So I think there is still, perhaps, a group of government organizations who don't really like the concept, but I think there is a lot of acceptance of it. Yes. Thank you. All right, thanks. Hello. Two questions, actually. Did you make use of some open or non-open standards in the implementation? Yeah. Yeah, I'm not a developer, so I skipped the open components part. But no, it's mainly open components. The communication works via REST APIs. The backend is developed with Java, the frontend with OpenLayers, Angular and stuff like that. But I mean standards on how the feedback is being handled. Okay. No, we didn't use any standards for that. So that's our own standard, which is, of course, just REST, text fields and X-Y coordinates. Okay. So then the other one: will the feedback provider or the citizen get visibility on how their feedback is being handled in the system? Yeah, yeah. If you saw the application, there are several colors. So as you can see here, there are several statuses of reports: green means it's approved, and with yellow it's new, like this one. And you can see some that are declined, like this one, and some are green, like this one. So you can read them all, and also read those of other people. And it's also a learning effect: when you see this one is wrong, and what's wrong about it — oh, then I shouldn't put it there. Hi. I was wondering how long you will keep up the markers. If there are about 20,000 feedback moments, at what point will you stop showing them? Yeah, that's a good question. Actually we thought, well, maybe when we are lucky, 100 reports every month. So then after six months, we could clean up the mess. But now, yeah, well, there are so many markers. So now we are thinking, well, maybe a month, or putting a maximum amount on it. We always show the open ones, but the closed ones or the declined ones we can put away. And I think a month or so is it for now, yeah. More questions? No?
Open data is becoming more and more available, but is mostly designed as a one-way street. Data flows from the governments to the data users, but the information flow from users to the government is either overlooked or ill-constructed. In the case of the Dutch geographical key registries (BGT/BRT/BAG), the ambition is to get both data flows right, by embracing concepts such as volunteered geographic information (VGI) and user-centred scrum development. This talk will show the 'Improve the Map' application, which is developed by the Dutch Cadastre, including the concepts and theories behind it, its current looks and experiences, and its bright future. The application is created entirely from open source components. Moreover, the providers of more Dutch open geodatasets are currently interested in using the application, which will bring data providers and data users closer together. Our ambition is that every open dataset deserves its easy-to-use feedback application which is open to everyone. And with everyone we really mean everyone: even primary school kids will help us to keep our maps of good quality.
10.5446/20414 (DOI)
Good afternoon, everybody. Welcome to the afternoon session. My name is Anthony Scott. I'm chairing these three talks. So without further ado, I'd like to hand over to some developers from the MapServer team, Thomas and Daniel. Hi, everyone. Thank you for coming. So this talk is going to be a status report about MapServer: what's been happening in the past year and the couple of years before that. I'll give a short overview of the project statistics, then go over some of the main new features we introduced with our 7.0 release last year, then some insights about what's coming in the next major version. And Daniel will take over then and give an overview of the existing client applications that gravitate around MapServer and help you do other things than text-editing map files. So just a brief history for context. MapServer is getting rather old: 1994 was the birth of it. And in the years after that, many known people came to the project — in '95, Frank. The first official release was in 1997. Paul joined and then added PostGIS support in 2001. There were the first user conferences in the beginning of the 2000s. And in 2006, with the creation of OSGeo itself, MapServer was one of the founding projects. We've been presenting at all the major FOSS4Gs that have happened since 2006. We also meet up as a development team to sprint for one week, once every year. So we've been to Toronto, New York. The last one was Paris this winter. We're in the planning phases for the next one to happen in 2017. For more recent history, we had our previous major release in 2011, with some intermediate releases once every one or two years, depending on the speed of development. Our last major release was version 7.0 in July last year. And since then, we had maintenance releases at the beginning of this year and another one to come soon. Concerning the statistics of the project: it's 200,000 lines of code, 13,000 commits, roughly 120 contributors in total. Mailing lists which are subscribed to by 1,800 people for the user list, which is user-oriented, and 400 people on the dev list, which is more where the architecture discussions go on. MapServer is directed by an international PSC. We currently have 14 members on it: nine people from the US and Canada, and five people from Europe. The major developments and changes go through RFC processes, and we are up to 116 RFCs; the last one was a couple of months ago. Statistics for the past year: the current activity is 350 commits, with contributions coming from 30 people. Just a reminder, we are OGC compliant, and we are actually certified for WMS 1.3 and WCS 2.0. For WCS 2.0, MapServer is actually the reference implementation for the OGC. I suspect that we could be WFS 2.0 certified; it just needs a champion to get the ball rolling and start the process. So last year we released MapServer 7.0 — maybe something like a year later than we would have wanted to. That happens. Major new features in that 7.0 version were support for UTFGrids, support for WFS 2.0, support for heatmaps, styling done via JavaScript and V8 directly from the map file, and a big overhaul of the filtering and of the syntax used to filter attributes and do geometric filtering. There was the addition of layer-level compositing and blending modes and other less prominent stuff, and the refactoring of the text rendering pipeline to support all languages of the world.
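As a concrete illustration of the WFS 2.0 and filtering work mentioned here (and detailed a bit further on), a filtered GetFeature request against a MapServer WFS might look like the sketch below. The endpoint, type name and attribute are invented; only the standard WFS 2.0 KVP / FES 2.0 request structure is assumed.

```python
import requests

# Hypothetical endpoint and layer -- only the WFS 2.0 KVP + FES 2.0 filter
# structure is standard. With MapServer 7.0 such attribute filters can be
# pushed down to the underlying database driver instead of being applied
# after fetching everything in the bounding box.
FILTER_XML = """
<fes:Filter xmlns:fes="http://www.opengis.net/fes/2.0">
  <fes:PropertyIsLessThan>
    <fes:ValueReference>velocity</fes:ValueReference>
    <fes:Literal>-5</fes:Literal>
  </fes:PropertyIsLessThan>
</fes:Filter>
"""

params = {
    "SERVICE": "WFS",
    "VERSION": "2.0.0",
    "REQUEST": "GetFeature",
    "TYPENAMES": "ms:points_layer",
    "FILTER": FILTER_XML.strip(),
}
resp = requests.get("https://example.com/mapserv", params=params)
print(resp.status_code, len(resp.content))
```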
We removed the ancient GD renderer and bitmap labels, worked on encodings — using differently encoded data sources inside the same map file — and there was work on becoming WCS 2.0 compliant. WFS 2.0 — I won't talk much about it because I don't know much about it, but basically here it is: we support WFS 2.0 and we should be certified. The JavaScript V8 support lets you use the JavaScript language to create complex styles on the fly — basically JavaScript code analyzing each feature's attributes and creating styles on the fly for it, used basically for very complex symbology. A big thing was the rework of the attribute and geometry filtering. Previously, for most WFS requests, MapServer would pull in all of the data for the given bounding box and then do the filtering on its own. Now those filters are, where applicable, pushed down to the underlying database drivers, so the amount of data transferred between those and MapServer is much smaller, and that speeds up WFS queries quite a lot. It's also somewhat simpler to use, because all the filtering is done with the MapServer syntax and translated to PostGIS or Oracle Spatial on the fly. We also added support for heatmaps. It's vector-to-raster processing — a vector input source and raster outputs — quite configurable, depending on the color space you want to interpolate in and the scaling of the data you want to use, how each feature is weighted using MapServer expressions, and it supports tile modes for seamless tiling of heatmaps. Just a couple of examples: the same dataset with different filtering and weighting parameters applied. We added layer blending and compositing modes; this is very useful for people doing hillshading and overlaying rasters on top of vector data. Here's an example where the hillshaded raster is applied with a multiply effect, which usually gives much nicer rendering than the default compositing modes. For the text rendering we rely on the HarfBuzz library, which has become the de facto library for handling international text. The image shows you basically the steps happening for an Arabic example. So the first line is the set of characters coming in one by one, then reversing them for the right-to-left parts of the Arabic text, and then the actual shaping, which, as you see, drastically transforms what the text should look like compared to the individual glyphs that are used inside of it. You also finally have the ability to specify which fonts you want to use for each language. There are still no real fonts out there covering all the languages of the world, so this allows you to mix them. That also came with quite a nice performance improvement. This is the time taken by the label cache to do decluttering of labels for a given map. The new speeds — notice the log scale — where the new version is at one second, the old one was at a thousand seconds. You shouldn't be using that many labels on your map, but it's nice to have anyway. So this is an example of another complex script, for Tibetan. Now, features coming in the upcoming 7.2 release — we don't have an ETA yet. The first one is choosing better positions for labels — I don't know how well you see it on the screen here. The left one is the previous version: you see that if you place labels on sharp road corners, you end up with overlapping characters and/or labels that wouldn't be rendered because they have too much overlap.
In the 7.2 version, we're trying to detect where those sharp angles are and trying to offset the labels so those places fall on character breaks in the text, for nicer output. 7.2 also supports layer filters, basically blurring and shadows. So here you see there's an inner glow inside the rivers, with a blurring effect on them, and shadows you also see around the buildings: basically taking the original shape, blurring it, applying some opacity to it, applying a translation to it. It could also be used for other things — to apply color adjustments on the fly to a given layer, to do a black-and-white output for a raster layer, for example, stuff like that. We're still exploring the possibilities of what additional filters could be added, but those are the ones available for the time being. We also want to support vector tiles. We have an initial implementation that's not integrated into the master branch yet, but more or less ready. It's not limited to XYZ addressing and the Web Mercator projection; we're reusing the WFS interface to access those tiles from MapServer, which allows you to do stuff like renaming attributes, selecting which layers you want, whatever. With this, we have a few open issues that are waiting for champions to either fund or develop or take over. We have PHP MapScript, which is not supported with the new PHP 7 releases. The vector tile driver needs a bit of love to become completely integrated, with sufficient configuration options. The JavaScript V8 stuff needs upgrading for the new releases of V8. There's also been work happening on the documentation side. A few years ago, Thomas Gratier switched the translation stuff over to Transifex for crowdsourced translation of the MapServer documentation; that's an ongoing task. And at the code sprint in Paris last winter, we had a team of MapServer users who took over the task of modernizing the MapServer tutorial we were previously using. We set up continuous integration and quality assessment, so each commit is tested with the roughly 2,000 functional tests we have, on Linux and on Windows. And we had a big closing of a large number of tickets, thanks to Stefan also, this winter, with 700 issues that were closed in just a few hours — very efficient. We decided to close tickets that had been inactive for more than one year, so we closed bugs going back to, I think, 2002 maybe. And now I'll hand the microphone to Daniel, who's taking the rest of the parts. Thank you, Thomas. Is my mic — yep, okay, good. So I'm going to talk about two common questions that we've heard over the years from users and that we're going to address here. The first one is the request for having a list of service providers and having a place where service providers can actually list themselves. That's something that we've addressed this year. There's a new service provider page that's been added to the website. There are three categories of providers. Core contributors — that's the people who are members of the PSC or actually have commit rights, so those are the people who can actually integrate your stuff into the software. But then there's also contributors, who are active people — many of you are in the room — who are able to support other users in the use of MapServer, so you could be listed there if you want. And then there's another service provider section, which is a place where companies can list themselves if they provide a complementary product, as long as it's open source and it's related to MapServer.
So if you want to add yourself, we made it so that you have to prove that you are kind of technical, so read RFC 116 — the instructions are there. If you cannot make sense of it, you may not fit in the providers directory. Another question that we hear often is: well, where's the GUI? How do I edit those map files? I'm tired of using a text editor for editing map files. So that's a question that has come up over and over through the years. And I mean, there is a GUI — it's called vi or Emacs or Sublime Text. It's been around forever, everybody's got their own flavor of it, and it's stable, it's well tested, so what's the problem, right? Well, it's actually a good thing that MapServer didn't come with a GUI, because there are actually — when I prepared this — about a dozen open source products, and I'm not even talking about non-open-source ones, that are built on top of MapServer or built around MapServer and actually make your life easier in using the software. So MapServer remains the core engine that renders the map for those products, but then they help you do your job. So let's go over them relatively quickly, because I think I have like five minutes left — less than five; it's gonna be five. So I'm not gonna stick strictly to map file editors; I'm also going to include others, because there's a gray area between application frameworks and map file editors. There's a gray zone in between, so I've included some of them that are not strictly map file editors in the list. And since it's a list, any list is likely to miss some important players or important items, so don't blame me if I missed your favorite product — just go to the wiki. This slide is intentionally small and long so that you don't read it now. Go to the wiki; you can list yourself if you have an open source product that's not listed already. So the first one — and they're listed, by the way, in alphabetical order, and I started with the ones that are officially OSGeo projects. So GeoMoose is built on top of MapServer. It's a framework for building applications. It's HTML and JavaScript on the client side and PHP using PHP MapScript on the server side. So that's the first one. It's really a framework for building applications, and it's built on top of MapServer. The ZOO-Project is not necessarily about editing map files, but it actually provides a web processing service on top of MapServer, so it implements the Web Processing Service spec. It provides services, APIs, a bunch of operators, so if you need a web processing service, that's a good one to look at. The Chrome app probably wins my personal prize for simplicity. It's a bit of HTML and JavaScript that you just download. You do File > Open in your web browser, and you're going — nothing to install. Open your map files, and it gives you a hierarchical view of the objects in your map, and then you click on them, and it gives you a dialog to set the parameters, so it's as simple as it gets. EOxServer — this one is a bit like ZOO, which implements a web processing service; EOxServer implements the Earth observation standards for WCS and WMS on top of MapServer, so if you need INSPIRE and that kind of stuff, Stefan would be one person to talk to about that. MagnaCarto implements the CartoCSS standard, and it's a converter to turn CartoCSS into MapServer map files and Mapnik configurations.
It does not come with a GUI, an editor — the way Oliver calls it, it's BYOE, bring your own editor — but it does provide a viewer. I saw Oliver's presentation; it's actually quite interesting, the CartoCSS — it's a good mechanism for styling. There's another one called MapFile Generator. I don't know too much about it; it was listed by its developer on the website. It's listed as still being under development, but it also provides mechanisms for reading and editing map files. MapManager is a desktop application that runs on Windows; it provides you a WYSIWYG environment for editing your maps and previewing your changes. MapMint was initially a cloud-based service, which has been released as open source recently. It provides a bunch of things — it's more than just a map file editor, it's actually a complete application development framework. I believe it's built on top of the ZOO-Project as well. So you could have a look if you want a complete application development environment, including building maps. ScribeUI is another option for editing map files; it's got the map editor on one side and a map view on the other side. Similar to MagnaCarto, which has the CartoCSS method for styling, Scribe has another syntax, which is called the Scribe syntax. If you have seen CartoCSS, Scribe has kind of the same mechanisms and same concepts — variables and automated scale management — but in a more MapServer-like syntax. It supports both the Scribe syntax and the MapServer map file syntax. Then there are two export plugins, one in gvSIG. So if you're a user of gvSIG, there's an export plugin you can use to publish your project with MapServer, and the same with QGIS. It's called the RT MapServer Exporter — look for it in the list of plugins. I was told that it requires MapScript to install, so it's not necessarily easy to install, but once you've installed it, it's going to work well to take your QGIS projects and publish them with MapServer. So in that list, we've gone over 11 products that can help you in your work with MapServer — so there are some GUIs. So that's it for my quick walkthrough of those products. We wanted to keep some time to have interaction. We have several of the developers of the products I just mentioned in the room, because I suspected that some of you may have questions like: does this product really do that? It's time to ask. If you want to ask questions, tell us about the developments — it's time for that too — or bring up concerns, ideas. Quick question regarding the vector tile support you briefly mentioned: are you adhering to any standards regarding vector tiles, so the Mapbox standard, or what kind of standard are you using for that? Yeah, it's the Mapbox vector tile specification, except that it's more than just the Web Mercator projection. And do you also provide the styling, based on the styling you have configured with MapServer, or how do you do that? There's no styling in the vector tiles. So the styling is pushed over to the client, and then it's up to your client to do the styling. Okay, thanks. How's the Windows support in MapServer at the moment? I believe the MS4W package has been upgraded recently, because it was lagging a little bit. But I'm not the — you know, Jeff McKenna is the guy who's maintaining MS4W, so I'm not the right person to answer about it. You might know more about it. Do you use it?
There are active releases going on right now, so yeah, it's very active. And there is also, Jürgen, do you still do builds in the... There was Tamas. No, Tamas has builds, but there were some as part of OSGeo4W. Tamas also did the OSGeo4W builds. Okay, because there are the OSGeo4W builds, but I'm not sure how up to date those ones are. And Mike is providing some Docker images for those who are on Linux and using Docker. Are they listed on the downloads page, the Docker images? I don't know. They should be, yeah. We could ask you to make sure they are listed. Thanks Mike. Thank you. And Tamas. Tamas also provides automated builds, either for the releases or, I think, as a daily build chain, building daily packages of MapServer and GDAL and maybe others, I don't know. Time for one more quick question. No, okay. Thanks very much. Oh, there is one. Thank you. Any plans on supporting WFS-T, the transactional part? With TinyOWS, don't we do transactional? So TinyOWS does the transactional WFS-T, but some people complain that it doesn't support everything that a map file supports. Our answer to the WFS-T question has been to integrate with TinyOWS a few years ago, because doing it in the core would require too many changes. The core of MapServer is really optimized for rendering, and WFS-T goes both ways, and it's not necessarily... It would be technically possible, but that's not the ideal thing to do. Do you have something to add on that? No. Okay. Thank you very much guys. Thank you. Thank you.
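For readers who have never seen what those editors and plugins actually manipulate, the snippet below is a minimal sketch of driving MapServer from Python through MapScript: it loads a map file, switches a layer on and renders an image. The file name, layer name and output path are hypothetical placeholders, not anything mentioned in the talk.

```python
# Minimal sketch: load a MapServer map file with Python MapScript and render it.
# "example.map", the layer name "countries" and "example.png" are hypothetical placeholders.
import mapscript

map_obj = mapscript.mapObj("example.map")      # parse the map file

# Toggle a layer on before rendering (layers can also be controlled per request).
layer = map_obj.getLayerByName("countries")
if layer is not None:
    layer.status = mapscript.MS_ON

image = map_obj.draw()                         # render the whole map
image.save("example.png")                      # write the rendered image to disk
```

The same mapObj is what a GUI such as ScribeUI or MapManager ends up writing out for you; the map file stays the single source of truth for the rendering engine.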
2015 was a big year for the MapServer project with the release of the 7.0 major version. This presentation highlights the new features included in this version, like WFS 2.0 for Inspire, UTFGrids, or heatmaps, as well as a recap of the main features added in recent releases. It further shows the current and future directions of the project and discusses contribution opportunities for interested developers and users. After the status report of the MapServer project there will be the opportunity for users to interact with members of the MapServer project team in an open question/answer session. Don’t miss this chance to meet and chat face-to-face with members of the MapServer project team!
10.5446/20411 (DOI)
So we start. So welcome everyone to this track, which is basically about Sensor Web Enablement, but you could say it's about the Internet of Things, about the SOS standard. And we have three speakers. The first speaker is Massimiliano Cannata, Maxi, talking about istSOS, a Python implementation of the SOS protocol. Okay. Thank you. I will start, and this talk is about sensor observation services and our implementation in Python. The content of the presentation, as you can see on the slide: I will start with a short introduction on why we need to monitor, then I will present our proposed solution, istSOS, then I will talk about this implementation in a production environment, and then I will conclude with the next steps of the project. Okay. Why do we need to monitor? The first answer is that we need to monitor to control and try to understand what is happening, in order to try to solve some societal challenges, so to help solve some issues. This is an example of how the hydrological dynamics interact with the ecological and geomorphological functions. All of these are somehow influenced by the behavior of one or the other ecosystem. So being able to monitor the system helps us to understand what the interactions are and how we can better predict and manage the resources, for example. But this is just an example from environmental issues; there may be other fields of application of this type. And how do we better understand the dynamics and the systems? By integrating different types of information, from vector maps to other data, and what we concentrate on here are our own field observations. And it's very important to understand the data that you collect in order to take a correct decision. It is not only about timeliness, having the right information at the right time, but also the capability to access this information and to understand the completeness and the quality of this information. In early warning systems, for instance, having very bad information may lead to incorrect conclusions and may also lead to very high damages and costs. Well, if you're looking for a solution, what we think we have implemented follows these main ideas: simplicity, agreement with standards, openness of course, power, and possibly Python, because we like Python. Then maybe istSOS is what you need to manage this data. You can have a look at the application on the website istsos.org. There are different features, and there you can find documentation for users and for developers and a demo application online. You can explore things. How have we used this service and how does it fit with other standards? Well, this is an example for early warning. There is a data part where feature servers, coverage servers and sensor observation services together provide the information, the basic data for some analysis, maybe some modeling. And then the output goes to expert people who try to understand what is happening, to raise notifications, maybe automatically or maybe manually. And then this information, the understanding of the situation, is translated into action. From the SOS standard perspective, you have two different types of users, the data consumer and the data producer. The data consumers are either humans or machines that try to get the right information. The data producers generally are the sensors that collect information and send it to the system.
The software is structured in different layers, one wrapped in the other. We have the services with the library and data warehouse and some configuration files. And this is the base to provide SOS-compliant XML responses. Then, on top of this, we built some RESTful APIs to facilitate the creation of a web interface for administration, so that users can interact with a user interface, and also because you can then program scripts to automate some operations on the services. What you get if you use istSOS are the standard SOS features plus a number of extended features that we have implemented because we work with data managers and we try to respond to their needs. So we have implemented authentication and authorization, which we will talk about later. We have real-time data aggregation support. We support different time zones, so you can request the data in your time zone and it is automatically translated. We have an active quality index, which means that each measure you put in the service, in the database, has an associated quality index that tells you what the quality of your data is. Then we have integrated MQTT support, and this will also be a topic in the next slides. We support virtual procedures, which are basically procedures that use data from other sensors to do a sort of processing on the server side. From the user perspective, this is just a sensor that is observing something else. For example, we have used it in an FP7 project where we have a meteo station observing air humidity, temperature and wind speed, and we use it to provide evapotranspiration in real time. So we just store the original data, but when a user asks what do you observe, one of the answers will be: we are observing evapotranspiration. We don't actually store that data, but when you ask for it, it is automatically computed, and what you get is the evapotranspiration. We support multiple output formats. In addition to the standard XML, we also support JSON and text files, CSV. And I already told you about the graphical interface that we have. What we are working on: we are working on support for big data. We have implemented a notification service, but I would say that this is still probably experimental; it has not really been tested. We have implemented some widget APIs and an SOS client API. These are basically the additional features that you get with istSOS. I have to say that we do not support all types of information. We support only information from in-situ, point-wise sensors, fixed and mobile. So it means one temperature sensor fixed in a position, or one temperature sensor on top of a car that is moving. With the new release, we support version 2 of the standard. This was a big missing piece in our software in the past, also because version 2 is one of the versions accepted by INSPIRE. So with the new release we pass the CITE tests for the core and transactional profiles for version 1 and the core and KVP binding for version 2. Another new feature is security. We have implemented different types of configuration that you can apply, so that you can make a server fully open – so HTML, the administration interface, the RESTful API and standard SOS requests – or you can decide to close everything and have access with user and password, or only have the welcome page open, so people can see that there is something, but then to actually get access you need credentials.
Or only open the SOS part while closing the administration part and the RESTful API, or only open, let's say, the data consumer functionality of the service while closing all the other transactional operations of the service. Not only do we deal with this security configuration, but we also deal with different types of users. We have defined different types of users with different types of access. There are admins that have access to all the SOS features of the server. Then you have network managers: they can modify everything, but only within a single SOS instance, and a single SOS instance generally corresponds to a different monitoring network, so there is one person that deals only with his monitoring network. Then there is a data manager that can modify the measures and the procedure metadata, but cannot create new sensors or modify administrative things. And then there is the visitor, who can only view the data. Another feature that we added is MQTT support, to try to go toward big data usage. Of course I don't have to say too much about this topic, but SOS may be one of the standard ways to machine-to-machine interoperability; however, low bandwidth is required, and MQTT is small and suited to low power usage, so this is the reason why MQTT is gaining so much ground. You can receive observations from an MQTT broker – so you can register your sensor with your MQTT broker and get your data directly into your SOS – or you can publish the data that comes into your SOS to an MQTT broker. So generally this is how it works with MQTT: you have a temperature sensor, it publishes the data, a user subscribes and gets the data. This is how it works; that is the normal flow with MQTT. With the SOS integration this is something different. With istSOS you can get observations that are inserted in istSOS and then pushed to an MQTT broker, so a user can get the data from normal MQTT, enabling somehow the usage of this data over MQTT. On the other side, you can still have a user that gets the data and has access to all the historical time series to perform some analysis, so you also have this storage enabled. And as usual we developed the interface, so it's quite easy: you just have to set up the connection to your MQTT broker and then everything is done. We have done a code sprint, and finally somebody can say we moved from SourceForge to GitHub. And we were lucky and we had three students for Google Summer of Code. They did some work on istSOS widgets, an Android API and the SOS client API. The first one is basically a JavaScript API that allows you to easily configure and get a widget to be included in web pages, with maps and so on. And then the same thing with Java instead of JavaScript, so you can develop your Android application easily. These APIs and libraries wrap what was the RESTful API. And then some time series visualization using web components, to easily create your plots getting data from your istSOS server. Can you trust istSOS if you want to implement it in your case study? We have one implementation, which is a hydro-meteorological network. These are some numbers from our in-place implementation: we have 150 sensors, 40 years of data, 88 million observations. And these are some statistics for the month of April. So we can assure that the service is reliable and can be applied. I also have to say that we have a particular strategy for the data.
We get the data as raw data, then we perform aggregation in a new instance, and then daily data. And we are running some load testing to identify the bottlenecks. So three different case studies, two different WSGI servers, and two different types of users with different numbers of concurrent users. We identified – we don't have the full conclusions yet – that GetCapabilities versus a big number of sensors is an issue, as is known; many concurrent users versus timeouts, the software is not failing, but the timeouts with a lot of users can be a problem; and very high frequency data versus the insertion time of the database. These are the main issues we have to work on. So, the next steps: we started a collaboration on building open hardware and open software. That includes the World Bank Group, the International Water Management Institute and the institute where I come from. We had a nice workshop in Venice at the Understanding Risk Forum. And we are starting in October a new project, a four times open, non-conventional system for sensing the environment, where we are going to apply our software, open hardware, open standards and open data all together. And we are going to build this up for developing countries. If you would like to join as a testing partner and collaborate, we are really, really open to this. And, well, okay, this is what we are going to do: develop the prototype, then provide the prototype for user testing and set up a real monitoring network to actually understand better what this kind of system can provide. And yes, of course, join. We had workshops and we are going to have another workshop at the OGRS meeting, which will be in Perugia this time. It's the Open Source Geospatial Research and Education Symposium. And thank you for your attention. Thank you very much, Maxi. Well, we have five minutes for questions from the audience. No t-shirts to win for questions in here. Yeah, thanks Max for the very nice presentation. So I've got a question regarding the web processing services you were presenting in the beginning. Are there any setups that are publicly available, and could we use them, for example, with other sensor observation services? So for example, not with istSOS, but with a 52°North SOS or another SOS implementation? No, actually, not that I know of. I mean, what we do is have an SOS data source, we have some modeling implemented in WPS, and we use the SOS to get the input data, perform some formatting of the input data and run the modeling. But not directly having an SOS input type, which is what you were talking about. More questions. So it sounded like you guys had to add a bunch of features to help make sure the system is production ready and you can use it. Are you looking into rolling any of those into the SOS standards if it makes sense? Or do the standards cover everything and you just have additional components that run alongside the SOS service? Well, for these features, I have to say that we are not really into these definitions, you see, the definition of the SOS standard. What we have simply done is listen to our hydrologists, who have 40, 50 years of experience working with the hydro-meteorological stations that we manage, and they were asking: okay, I need to change the data, but how can I do it? Data quality: we have some testing and processing, and we need to have information on this. And how do we do it?
And we started to think how it can be done in SOS, using SOS, and in the end it came out having some extra features, let's say. So when you make the request, you can add an extra parameter and you can get a specific response, which is still standard, but, you know, it extends it in a sense. I don't know actually if it can be implemented in the standard or if it is something to be discussed. More questions, there's still time for questions. A couple of minutes. Oh, yeah. As far as your sensors go, are you basically collecting video as well? No. Okay. No, we are handling up to now only value measurements. Okay. So that answers it. Thank you. Any more questions? Okay, clear. Well, then I want to thank Maxi for this presentation. It's good to see diversity also in SOS implementations, so in Java, Python, what comes next?
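As a rough illustration of the two integration paths described above – pulling observations over the standard SOS interface and republishing them over MQTT – here is a small Python sketch. The service URL, offering, observed property, broker and topic are hypothetical placeholders, not istSOS defaults, and it uses the generic SOS 1.0 KVP binding rather than any istSOS-specific API.

```python
# Sketch: fetch observations from an SOS endpoint via KVP and republish them to MQTT.
# URL, offering, observedProperty, broker and topic are hypothetical placeholders.
import requests
import paho.mqtt.client as mqtt

SOS_URL = "https://example.org/istsos/demo"     # hypothetical service endpoint

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "temporary",                     # hypothetical offering name
    "observedProperty": "air-temperature",       # hypothetical observed property
    "responseFormat": 'text/xml;subtype="om/1.0.0"',
}
response = requests.get(SOS_URL, params=params, timeout=30)
response.raise_for_status()
observations_xml = response.text                 # O&M XML document

# Republish the raw payload to an MQTT broker, as in the SOS-to-broker flow
# described in the talk. paho-mqtt 1.x style constructor; newer versions may
# require a callback API version argument.
client = mqtt.Client()
client.connect("broker.example.org", 1883)       # hypothetical broker
client.publish("sensors/T_LUGANO/observations", observations_xml, qos=1)
client.disconnect()
```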
istSOS is a complete and easy to use sensor data management system for acquiring, storing and dispatching time-series observations. istSOS is compliant with the Sensor Observation Service standard (SOS) version 1.0 and 2.0 from the Open Geospatial Consortium (OGC) and offers unique extended capabilities to support scientific data analyses (integrated quality assurance, RESTful API, on the fly processing with virtual procedures, remote data aggregation, time-space re-projection etc.). istSOS core libraries are written in Python while its easy to use interface is Web based. This presentation will illustrate the project and its latest enhancements, including: the OGC SOS 2.0 standard implementation, the Authentication and Authorization System, and the Alert and Notification system. Finally the presentation will discuss the challenges that istSOS needs to face for entering Big Data, showing results of scalability tests and ongoing new IoT driven development features. The robustness of the implemented solution has been validated in a real-case application: the Verbano Lake Early Warning System. In this application, near real-time data have to be exchanged by inter-regional partners and used in a hydrological model for lake level forecasting and flooding hazard assessment. This system is linked with a dedicated geoportal used by the civil protection for the management, alert and protection of the population and the assets of the Locarno area.
10.5446/20406 (DOI)
So, our next presentation is another geospatial application, GeoMapFish, and we have here Emmanuel Belo to present. Please introduce yourself. Thank you. Thank you. So, welcome to this talk. I will present you the GeoMapFish open source WebGIS application. It's a totally open source application. You can find it on GitHub. The server and the front-end side are both published open source. Just to introduce myself, I'm Emmanuel Belo. I work for Camptocamp. Camptocamp is an open source company that was started in 2001, so more than 15 years old now. We are a software editor and software integrator. This means we contribute heavily to the open source software we use. We are nearly 70 employees and we cover three countries: France with Chambéry, Switzerland with Lausanne and Olten, and Germany with a location in Munich. So, GeoMapFish. For us, GeoMapFish is a rich WebGIS application. There are tons of GIS features built into GeoMapFish. One of the distinctions of GeoMapFish is that it's a community-driven application. It's not Camptocamp or some editor that decides how it will be; it's a discussion with the users community, and we decide together how the framework, the application, will evolve. And one of the foundations is that we use OGC standards so that it is interoperable. So, I said there is the GeoMapFish community. We have user group meetings twice a year. The goal of this community is to ensure the sustainable development of a leading open source WebGIS. And this means that we have to target multiple aspects. First, we need to promote an inclusive dialogue between the users and the developers to decide the roadmap, the features, how the WebGIS will function. It's an open source community, so we also need to ensure that decision-making happens in a consensual way; it's not a kind of dictatorship, we decide by consensus how it will evolve. And also, as we have seen yesterday in the sustainability talks about open source software, we need to target fair funding. So we need, with the community, to ensure that we have a regular resources stream for the maintenance of the project and also for future development. Therefore we target a broad contribution basis within the user group, so that the levels of contribution are kept as low as possible. And still, the contributions are on a free basis; there is nothing mandatory, you can be in the user group and not contribute financially. Overall, this community functions as is, and since 2011 we could develop two major versions of GeoMapFish, one based on GeoExt and one based on AngularJS, and seven minor versions, each time with new functionalities. So, how is the user community built? It's diverse: public sector and private organizations. They all have in common the goal to publish geodata so that the public, engineers or other people can do geospatial analysis in a WebGIS. And all of this community needs a WebGIS for this. So we have cantons, like regions, in Switzerland. We have communes or groups of communes. We have cities using the software. We have engineering offices. We have facility management, also like the Lyon airport or the EPFL in Lausanne.
And one of the interesting things is that this collaboration between different kinds of users means that the requirements, the specification, the design, all this ensures that the result is practical, usable, practice-oriented, because we take care of all these requirements. And of course we have a broader collaboration: as we use OpenLayers in GeoMapFish, for example, we then also collaborate with national agencies in developing OpenLayers. So, what are the software architecture concepts we have in GeoMapFish, in this WebGIS? First, we like the architecture to be coherent. We don't want to have to configure the WebGIS in multiple places. So our paradigm is: we use one cartographic engine and we configure in this cartographic engine the layers, the queries, the legend. So everything is configured in the map service and published via OGC to the client. All the map configuration lives in one point, in the backend. And in case you use QGIS Server, you can also reuse the configuration you've done in your desktop GIS for the server part. Then, in each WebGIS there is a search tool, so you can search for data, you can search for places. Here our basic idea is: we have one full-text search table in PostgreSQL, and all the data you put in this table will be accessible in the full-text search. So it's very easy to handle the search. And then we provide an administration graphical user interface to define the structure of the layer tree and to configure the security. With these different kinds of users, it's an application, but this application needs to be configurable. Each one should have a choice about the layout, how it will look, which functionalities to activate, or how to enhance the application with their own features. And here our architecture is, I would say, mature, because it allows you to enhance the application without forking the basic common source code. So you don't need to fork, and this makes updates kind of easy. I said there is a security proxy, so you can also restrict access to some layers or features or attributes within the WebGIS. This is a table of all the GIS features that have been implemented, to show that we cover a broad perimeter. And the architecture: as I said, we have PostgreSQL. We can use different cartographic engines because we use the OGC norms, so it works with GeoServer and MapServer or QGIS Server. The GeoMapFish application is based on a backend in Python and a front end with OpenLayers and AngularJS. So, how does it look? This is version one; we are about to release version two. So, this is version one. You have a classical WebGIS user interface and of course we use WMTS for the background data. There is, for example, a slider that allows you to adjust the opacity between different layers. We have a theme organization that makes it easy for non-expert persons to access specific preconfigured maps with, for example, historic data or POIs or cadastral data. Of course, there is a layer tree. All these things expected of a WebGIS are implemented; we've seen these are kind of mandatory features in a WebGIS. We have dynamic legends. This means that once you have configured your web service, WMS, WFS, you also get, via GetLegendGraphic, a dynamic legend that takes care of the zoom level.
So each time a feature type appears at a certain zoom level, the legend will update itself. And we have different kinds of queries. You can query per point or per bounding box, so WMS or WFS queries, and display the result in a pop-up, or do the same point or bounding box query and display the result in a grid, with the data structured in different tables according to their layer of origin. We have a search. I said everything is configured in the PostgreSQL full-text search table. Here you can have the results either grouped in different topics, like communes, addresses or different points of interest, or everything together, more in an auto-completion mode. And this works well. You can put a lot of data in this table and search for addresses, search for parcels, search for anything. You can do some redlining on the map with points, lines, polygons, and then change the color, change the size, put some text on it. You can print with the MapFish Print component, which allows you to select the zone and to rotate the bounding box for the print. And you get either a PNG or a PDF file. In the current version we use MapFish Print and JasperReports. This means that besides the map you can also access a lot of data that would be in your database or in your environment, so you can generate proper reports, like these two examples. For example natural hazards, and we also use it to generate reports about street work authorizations, in case you need to dig holes in the street. There are a lot of possibilities. And here just an example about the security. You can be either anonymous or you can log in, and once you are logged in, you have, for example, access to more data sets. Here also an example of an elevation profile. Or you can do complex queries and exports, also based on WFS queries. As we also have customers in the facility management area, like the EPFL or the airport, we have a floor slider. This means you can adjust with the slider which floor you want to display on the map. And of course in version one we have a mobile layout that uses the same configuration and allows geolocation, so it's also mobile compliant. It's multilingual. We have the graphical user interface to configure the layer tree and the security. You have a geodata edit interface where you can draw points, lines, polygons. There you can snap on the underlying WFS layers. You can also access predefined values in drop-down lists, so some keys are possible for the possible values of some fields. Also, for example, dates, or some nice widgets that allow for ergonomic editing. There is a routing engine with OpenStreetMap routing. And this WebGIS also provides an API, like the Google Maps API. This ensures that once you have built your WebGIS, you can also, via the API, integrate it in another content management system. So the value of your work, of your data, is higher, because it can be used not only in the WebGIS but also in partner websites. Then you can customize the view to simplify it, or you have the standard viewer. And as an example here, different OGC servers integrated into GeoMapFish. So that was GeoMapFish. Now I would like to present in short the new version, GeoMapFish 2. It's about to be released. Actually, it's released on the master branch of GeoMapFish on GitHub.
But we are about to have the first beta tag saying the version is complete. Now we are in the quality assurance phase. So what's the difference? We wanted to target a totally universal, responsive application. It should run everywhere with just one code base, and also take advantage of OpenLayers 3, which is much more powerful than OpenLayers 2. So we use OpenLayers 3, AngularJS and Bootstrap. And in order to build modular software, we designed, in the middle, ngeo, which is a library combining OpenLayers and AngularJS into directives. So you have functional modules you can reuse very easily, like GeoExt components, but based on OpenLayers and Angular. On the server side, it's an update of GeoMapFish 1, so there is no major change there. If you have all your data configured in GeoMapFish 1, it is still correctly configured for GeoMapFish 2. We use the OGC protocols and the MapFish protocol in some cases, like for editing. We use the Pyramid framework from the Pylons Project. And we use the Closure Compiler in advanced mode, which ensures that the JavaScript part is very, very well optimized. And as it's kind of a huge programming effort, we also have a lot of continuous integration tests, so that we can ensure good quality. So I would like to focus on ngeo before going further into GeoMapFish 2, because, as it provides atomic components, it also allows you to build synergies between different kinds of applications. For example, ngeo is used in GeoMapFish, but it's also used in GeoNetwork. The components can be used in other WebGIS applications. It allows you to build your WebGIS without starting from scratch. So the library is structured with a core, with different core directives, and then contribs, and with GeoMapFish we added the GeoMapFish contribs in ngeo, sometimes enhancing the core components. If we just have a quick look at it, you'll see that for the functional part of GeoMapFish, so the ngeo library, a lot of work has already been done, with quite a lot of contributors. And if you go into the source code, you'll see the directives and all the plugins that make it easy to build a WebGIS. So things like the layer tree, the grid, pop-ups, profiles, all this kind of functionality is already combined between OpenLayers and Angular. And so in GeoMapFish we use this, add more contribs, add more directives. If you go into the examples, for each directive you have an example. If I go, for example, to the profile directive, here in the profile example you can see there is not the layout – that will be done in the integration part – but still, the functionality to have a dynamic link between your profile, to display the profile, and the map is already implemented. So you can use it in GeoMapFish, we use it like this, but it can also be used in other WebGIS applications based on OpenLayers and Angular. So this is the new layout. It is a slightly new version compared to the previous one. It was designed in collaboration with our users, and one of the focuses was that we didn't want to lose them in this process. So it's slightly updated, but it's not a totally different layout, so the users are okay. So we have the desktop layout, a tablet UI and a smartphone UI.
So with the same code base and two different HTML templates, we can target these three kinds of devices. Then there is the new implementation of all the features we had, with new features like drag and drop, because the new frameworks allow this; in the layer tree we have the time slider to filter on WMS Time, for example; the search results in a grid are still there. The print is also implemented, and measure and redlining, with a new feature in this case: we had measure and redlining in GeoMapFish 1, and now it's the same feature. When you draw a point, you also get the coordinates; when you draw a line, you also get the distance. So all this measure and redlining is now one tool. Elevation and LiDAR profiles are still there; edit, snap, update with actions on the features, it's there also. So almost everything has been migrated, and it's about to be released as a beta version, but you can already check the examples online. And once we have done this 2.1 version, we're going to add 3D views with Cesium, we'll work on a tighter integration with QGIS Server, and I guess we'll also try to integrate with geOrchestra, the INSPIRE spatial data infrastructure. So if you want more information, go to geomapfish.org, go to GitHub. The source code is there, you'll find demonstrations, links to online WebGIS sites, and you can visit us at our booth. And if you want to practice, we are hiring, so just come and see us. Thank you very much. Thank you, Emmanuel. Thank you for your time. Questions please? Yes? On the security you mentioned, you can restrict by layer and by attribute. Is there a way to restrict by client? That is, there are two logins on the same application, two logins by people from two different companies, and they see different data? Oh, yes, yes. So you have users and roles, and you can say, for example, this role accesses this layer or this perimeter within this layer, and another person with another role accesses, for example, the same layer but another perimeter. We use this in cases where multiple surveying offices work together: they build one WebGIS, they have, for example, one cadastral layer, and then each of their clients has its own login and sees only within the polygon of their municipality, for example. More questions? Yes, please. Do you have any questions? Ah, yes, a question. I'll repeat the question: do we have plans to upgrade it to Angular 2? Or are there other possibilities of combining Angular 2 and OpenLayers? So, about GeoMapFish, at this stage there is no current plan to migrate to Angular 2. We have a roadmap with our users. The users have funded this version of GeoMapFish with Angular 1, and now they need time to migrate their applications. They also have their own plugins. So it's a collaboration. As long as there is no major technological improvement which benefits the users, I guess we'll stay on this version, so that we have a kind of return on investment on this technology migration. And about OpenLayers 3 and Angular 2, we have a customer who has started such a project to use these two libraries together. So it should work, yeah. Why did we choose Angular? So, we were using GeoExt, and there was this need for a responsive web map. This was, I guess, nearly two or three years ago. And at that stage, Sencha didn't provide this responsive, universal environment.
So we looked at what was existing in the responsive world and decided to use Angular. And I think we are pretty happy with this choice. Any more questions? So thank you very much for attending this session. This is the last presentation. You can move to other rooms. Thank you very much.
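The search behaviour described earlier – one PostgreSQL full-text search table feeding the search box – can be pictured with a few lines of Python. This is only an illustrative sketch: the connection string, table and column names below (tsearch, label, the_geom, ts) are hypothetical stand-ins, not GeoMapFish's actual schema or API.

```python
# Sketch of a PostgreSQL full-text search query of the kind that can back a
# WebGIS search box. Connection string, table and column names are hypothetical.
import psycopg2

def search(term, limit=20):
    conn = psycopg2.connect("dbname=gis user=www")   # hypothetical connection string
    try:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT label, ST_AsGeoJSON(the_geom)
                FROM tsearch
                WHERE ts @@ plainto_tsquery('simple', %s)
                ORDER BY ts_rank(ts, plainto_tsquery('simple', %s)) DESC
                LIMIT %s
                """,
                (term, term, limit),
            )
            return cur.fetchall()
    finally:
        conn.close()

# Anything inserted into the single search table (addresses, parcels, POIs, ...)
# becomes searchable through the same query.
print(search("station"))
```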
GeoMapFish is an open source WebGIS platform developed in close collaboration with a large user group. The second version offers a modern UI based on AngularJS. OpenLayers 3 and an OGC architecture allow to use different cartographic engines. Highly integrated platform, large features scope, fine grained security, reporting engine, top performances and excellent quality of service are characteristics of the solution. In this talk we’ll present the technical aspects of the platform and its modular architecture.
10.5446/20404 (DOI)
Next session by Angéla. Thank you. Okay, so my name is Angéla Olasz. My co-author is Nguyen Thai, and unfortunately he couldn't come to answer your questions, but you can contact us later on. So our research is about the development of a new framework for distributed processing of big geospatial data, and this is joint research of the Institute of Geodesy, Cartography and Remote Sensing, in short FÖMI, as you can see here, and Eötvös Loránd University (ELTE), located in Budapest, Hungary. The content of this talk is presented on this slide: I'm going to give you a short introduction to our research topic, and then I'm going to introduce a project called IQmulus, which is related to this work, but I'm not going to give a detailed introduction to the project. I'm going to continue by trying to define what geospatial big data is, what the differences are to big data which is not geospatial, and what the differences are to geospatial data which is not big. Then I would like to show you a comparison of existing solutions: we have tried to compare some software and frameworks, how they do distributed processing today, and I'm also going to present what kind of user requirements we selected to compare those solutions. Then I'm going to present IQlib, which is the main development of our research. It has a modular structure, so I'm going to present the modules and the actual development status of those modules. And I'm going to conclude in the last slides with some thoughts about future work. So, introduction: our goal is to find a solution for processing big geospatial data in a distributed ecosystem without any limitations on programming language, or on data partitioning and data distribution among the nodes, and which is able to run existing GIS processing scripts. As a first step, we focus on raster data representation, for example decomposing those datasets and then doing distributed processing. Before building this prototype system, we analyzed the data decomposition patterns – how a raster dataset can be decomposed and then processed on the different nodes – and then defined the common GIS user requirements on processing environments for big geospatial data. So we have some user requirements that we think are important for a framework or toolkit that supports distributed processing, and we also try to identify what geospatial big data is. Some thoughts about the IQmulus project: this research is related to IQmulus, which is a high-volume fusion and analysis platform for geospatial point clouds, coverages and volumetric data sets. So this is the main goal of the project, it's a platform. This project is going to be finished this November. As a result, we are going to have this analysis platform; everything we have tried to define on this platform can be made available. We are in the consortium, which is formed by 11 European partner institutions, and we are from Hungary. If you want more information on the IQmulus project, please visit the website, iqmulus.eu. So, defining geospatial big data is not an easy task. There is the well-known definition: it is big when you start to exceed the capability or capacity of the current computing background of your available system or your available technology.
And there are others in the literature that you can find; there is quite a big number of variants, because it's not so easy to define where the margin between geospatial big data and geospatial data lies. Some of them also admit that it is use-case specific, so it is also hard to define what is big data for one user and what it is for another. So we have tried to compare big data which is not geospatial, geospatial big data and geospatial data, as a short review, for three kinds of data representation – vector, raster and 3D – and then also compared those to non-geospatial or text-based data formats. And in this paper, which may be published by now, I don't know, we have collected those aspects for these three main definitions, for the representation and also for the storage and processing background requirements, as you wish. And then this table continues, where we included the existing solutions for those formats for each of these three definitions, and some requirements that would be useful to have from an existing solution. So we have collected the most popular frameworks supporting distributed computing on GIS data, and we selected, for example, the following aspects, which are also listed in a table, and made a comparison between them. So we would say that input and output data types are of course important – what kind of data they support; whether already existing GIS processing or executables can be run or not, this is one of the main points; and what kind of data management they support. Supervision of the data distribution, especially for the raster data type: we would like to have full control of which data chunks go to which node and then of getting back the processed data; and other aspects like scalability potential, supported platforms and so on. So we collected all of that information and tried to compare the existing solutions. This table has already been published in a paper, and in the later slides I'm going to show where you can find it. So, after all, and also from experience in the IQmulus project, we concluded that in most cases full control over the data partitioning and data distribution mechanism is not supported, and also that it's not really possible to run already existing executables or scripts in such a platform or ecosystem. So we decided to develop our own distributed processing framework. This has been initiated by three project partners: one is IGN France, the second is CNR IMATI Genova, and FÖMI from Hungary. The name is IQlib, and IQlib is going to be a framework that supports data decomposition as a core functionality – this is called tiling – then data distribution and distributed data processing as the second main functionality, and then IQlib also provides a functionality to stitch those results. This way we can overcome the scalability limitations of the processing algorithms. So, the high-level concept is formed this way. This could already be updated a bit, because there is a new module that I'm going to introduce in the next slide. The main thing is that there is the tiling and stitching part, the data distribution part that we would like to apply, and then the already existing remote sensing or GIS processing scripts can be run on those datasets.
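To make the tiling step just described more concrete, here is a minimal Python sketch that cuts a raster into fixed-size tiles with GDAL. It only illustrates the decomposition pattern; it is not IQlib code, and the input file name and tile size are arbitrary assumptions.

```python
# Sketch: decompose a raster into fixed-size tiles with GDAL.
# Illustrates the tiling pattern only; file name and tile size are arbitrary.
from osgeo import gdal

TILE = 2048                                    # tile edge length in pixels
src = gdal.Open("country_mosaic.tif")          # hypothetical input raster
xsize, ysize = src.RasterXSize, src.RasterYSize

tiles = []
for yoff in range(0, ysize, TILE):
    for xoff in range(0, xsize, TILE):
        width = min(TILE, xsize - xoff)
        height = min(TILE, ysize - yoff)
        out = "tile_%d_%d.tif" % (xoff, yoff)
        # gdal.Translate copies the window [xoff, yoff, width, height] into a new file.
        gdal.Translate(out, src, srcWin=[xoff, yoff, width, height])
        tiles.append(out)

print("created", len(tiles), "tiles")
```

In a distributed setting, the metadata of each tile (extent, offsets, location) would then be registered so that the partial results can be stitched back together afterwards.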
As a researcher from the national mapping agency, this would be very useful, because we have already developed our operational GIS processing in a different system or in a different language. So it would be very useful to have a framework where this can somehow be reused for processing in a distributed way. So, these are the modules of IQlib. There is a data catalog module which is responsible for storing the metadata – not only data about the data but also data about the processing. We would like to have information on the data chunks and also on the results, and those are going to be stored in the data catalog module. There is a tiling and stitching module, which is responsible for tiling and stitching. The metadata of the tiles and also of the stitched data is going to be in the data catalog as well, as I mentioned before. Then there is this new module that is responsible for the data distribution, and there is also the processing module that is responsible for running those scripts on the already distributed dataset. So, the status of those modules: the data catalog is almost ready, it is only waiting for the final approval from all partners, and then it is going to be made available as open source. In the first figure you can see the architecture of the data catalog module and the software used. The tiling and stitching module is already defined, and we also have the second figure, so here you can see the architecture, the high-level concept of how it's going to work, but this is still in the planning phase. Then the data distribution module – this is a new one – currently supports the SFTP protocol only, but the data partitioning and data distribution algorithms can be extended by third-party developers. So if you would like to add some ideas, do not hesitate to do so. And then the distributed processing module is also under development, and you can see in the figure what the architecture of this module looks like. All of this information can be found on GitHub. This is the IQlib specification, but it is going to be presented in a dedicated IQlib GitHub repository as well, as soon as possible as far as I know, but the specification and technical details are already there. Okay, so the related papers: we already had some presentations and papers at ISPRS about this topic, but of course we have a lot of work to do, and the future work. We would like to finish the implementations for all of these modules, test IQlib for the following aspects – running existing algorithms – and then experiment with execution on big geospatial data and then benchmarking, mainly on the processing time. So thank you for your attention, and I also would like to thank IQmulus and Dániel Kristóf for letting us be here. Thank you. Thank you. Questions? Thank you very much for the presentation. That is very interesting. There's one thing that was not completely clear for me. So, the algorithm itself, it runs over a distributed dataset, but is the algorithm distributed as well? For instance, if you have a sequential algorithm in R or something, the algorithm will still be non-distributed. If it's possible – I mean, if the algorithm, as it is written, can be parallel – then it's possible, of course. So, you support both? Yes, we would like to support both, because without this it's not possible to run. Thank you very much. Hi.
When you distribute the computation, is there some form of load balancing involved in this process or not? This I don't know. So maybe you can contact my co-author, who is the developer, and he can answer your question. One more question? Am I right that all data is tiled during loading into the system, or not? If the user wants, yes. If it's not needed, because we have enough power to process it without tiling, then it's not needed. And vector data that is tiled, can it still be edited after this? I think it's possible, but I don't know, because we have been focusing more on raster datasets until now. So the other project partners are working more on the vector side. Thank you very much. What kind of algorithms do you think can be run in this framework? I think we can have, like I have shown here, R, Python, Java, and Matlab, I hope. But if you could add something, I would like to know. Well, I was thinking about the processing itself. I mean, if the input is tiled, then it has to be local somehow. It can't be a global algorithm, for example. Yes, it depends on the algorithm, that is something related to the first question. Possibly. Maybe I can answer this question, because at Mapbox we have the same problem. We solved it similarly in having a MapReduce model. And one application that we use it for is finding missing streets in OpenStreetMap, which is a local problem that is confined to one tile. And we use telemetry data to figure out: oh, a lot of people drive on this street, but it doesn't exist on the tile. More questions? Okay, thank you.
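Picking up on the discussion about sequential algorithms: one common pattern, which roughly matches the "run your existing script on each tile, then stitch" idea from the talk, is to launch the unchanged script once per tile and merge the outputs afterwards. The script name, file names and merge step below are hypothetical placeholders, not IQlib components.

```python
# Sketch of the "process each tile with an existing script, then stitch" pattern.
# "ndvi.py" and the file names are hypothetical placeholders, not IQlib components.
import glob
import subprocess
from multiprocessing import Pool

def process_tile(tile_path):
    out_path = tile_path.replace(".tif", "_ndvi.tif")
    # Run the unchanged, sequential script once per tile.
    subprocess.run(["python", "ndvi.py", tile_path, out_path], check=True)
    return out_path

if __name__ == "__main__":
    tiles = sorted(glob.glob("tile_*.tif"))
    with Pool(processes=4) as pool:            # local stand-in for a cluster of nodes
        results = pool.map(process_tile, tiles)
    # Stitching: here simply a GDAL mosaic of the per-tile outputs (assumes the
    # gdal_merge.py utility is on the PATH).
    subprocess.run(["gdal_merge.py", "-o", "ndvi_mosaic.tif"] + results, check=True)
```

This works cleanly only for algorithms that are local to a tile (or to a tile plus a small overlap); genuinely global algorithms need a different decomposition, which is exactly the point raised in the question above.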
The Geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. However, most current distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Hence, this paper presents a prototype for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept developed in the frame of the IQmulus EU FP7 research and development project. The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. Our intention is to provide a solution to perform a wide range of geospatial processing capabilities in a distributed environment with no restrictions on data storage concepts. Our research covers methods controlling data partitioning, distributed processing and data assimilation as well. Partitioning (also referred to as “Tiling”) is a very delicate yet crucial step having impact on the whole processing. After algorithms have processed these “chunks” or “tiles” of data, partial results are collected to carry out data assimilation or “Stitching”. The paper presents the above-mentioned prototype through a case study dealing with country-wide processing of raster imagery. Assessment is carried out by comparing the results (computing time, accuracy, etc.) to concurrent solutions. Further investigations on algorithmic and implementation details are in focus for the near future.
10.5446/20403 (DOI)
Okay, so let's go for the second talk. Erik Meerburg from the Netherlands, who's going to tell us that GIS is easy? No it isn't. Not at all. I'm going to talk to you about training users, and move over to kind of a meta discussion about why we should train users and how we should train users. I'm not going into detail about what kind of program you have to use. I must say I found the speaker before me most interesting, that was really good. I'm definitely going to use that. That's excellent. Training users. My name is Erik. You can find me all week, just in case I forget to tell you at the end. You can find me all week. I'm here and I'm at the base camp. I've got the Rockabilly Caravan. Yay, so that's cool. How many of you are educators, by the way, or trainers? Oh, it's about a third of them. That's nice. Welcome. How many of you would call yourself what we call the end user of GIS? That's a few. And developers? That's most of them. That's a half. That's cool. Okay, thank you. Good to know. I founded the Geo Academy. From now on I'll just say Geo Academy because it has a nice ring to it. I founded it about five, six years ago, and we do that at a place called GeoFort in the Netherlands. If you come to the Netherlands and you've never been to GeoFort, go there. It's nice. But today we're going to talk about training. No GIS is easy. So who do we train? As you can see here, I've marked 'user', with some extra underlining, because we have to talk about users here. If you look at groups, we see about half of us here are developers. And then there are the real users. Well, real. Let's see. These are real users. This is one of our training groups. You see people here from municipalities, from the Dutch Kadaster. A self-employed guy, a firefighter standing in the back. So that is the group I work with. And they have different needs from most of the people here, most of the developers. If you look at those real users – I'm modelling a bit here, right? – you can define two groups of users. The ones who are proactive, who go out and find new things and teach themselves how to do things. And then there are the more conservative users. The ones who say: well, that's nice, yes, but I've got a job to do, and my main focus is on doing my job, and my software should support me. And if I spend extra time finding out all kinds of things, that is not my job. My job is doing the right things at the right time, in a minimum amount of time, and doing them well. So those people are the most important in most companies, most organizations. If you look at the training material that's online, in my opinion most of the training material online is made for those proactive users, the ones who go out and look for stuff. But what about the other group, the ones who don't go out, who maybe don't even speak English? Most of the material is in English, of course. Yeah, it's a bit of a problem for some people. So what we do is we bring them back into the classroom, and we do that a lot. And we teach them, but we let them teach themselves as well, because people have these different workflows with GIS. And what we find is that there are people from several organizations who do more or less the same things with GIS, and we don't even know about that. So we let them help each other as much as possible. But that is a bit the situation at our place. A second storyline I want to follow here is about sales. I've worked for a software company, a GIS software company that sells software.
And if you look at the way sales works at those companies, you sell a software solution. Together with the software, which is about half of the total cost, you sell a lot of services – implementation services, help, et cetera, things like that. And the third part is that you sell the training for the product. I mean, you're the supplier of the product, so you are the one who should know the most about it and who should teach them. So you sell three things. Now what do we do in open source? We don't sell the software. Huh. So what do we sell? I don't know how it is where you are from, but in the Netherlands I see that most people sell services, implementation services, building some extras onto it and things like that. And that is the main product of the Dutch open source community: they sell services. So what about training? Well, we have to go back to those users again, the real users. If you divide the real users into proactive and conservative as I do, well, you could also map them out on a scale of their level of knowledge or use on one side and the number of users on the other side, and you could make a nice distribution out of that, and I just guess it's a normal distribution. I don't know. Might be. I didn't really research that, but it's an assumption. And you can see that the proactive users are mostly on the right. They know a lot already and they want to find out more and they learn more. And the more conservative user, who does the thing he has to do, he has to work with the software. I think that if we take that distribution of people and we split it up into those two groups, the proactive ones are on the right, and it's not like it's split in half. They're on the far right. And most of the users are conservative users. Now for these proactive ones, the material that's online now is a perfect fit. They find what they want and that's okay. But what to do with the bulk of the people? And that is one thing that I'm a bit worried about in open source software here, especially for geospatial of course, because nobody sells the product. So who trains the users? Will a government start using open source GIS if the people working at that government will not be trained? It's like: yeah, well, we got free software and everything, and just, you know, it should be easy, find out how it works. That's not the way it works in the Netherlands. So for those people, we need a hands-on trainer. Hands-on trainers have a very big plus. A good trainer tries to build his classroom into kind of a mini community. It might be a community for a day, for two days, or for half a year, but it's kind of a community. So you celebrate successes at the end. And that's what we do here. This is a good one. This is the way a training course should be, with a happy ending. Okay, so how do we do sales? If you look at the current model, if I'm just modelling a bit here again, you see that we have the service supplier. In the Netherlands we have, for example, the OpenGeoGroep – OpenGeo group, that's a nice translation. We've got a lot of smaller companies as well, and a combination of companies like the OpenGeoGroep. And they tend to go to the decision maker and they say: well, we've got this nice software product, maybe we could have this installed and service you with it. And training suppliers, we do the same thing: hey, I hear you are starting to use QGIS. Maybe that's a good thing. Maybe I can help you with it.
And if everything goes well, it ends up like this, that you have nice communication and you start doing a training session together. Another model that we've seen before is this one, where a question comes from the decision maker and he asked the service supplier and the service supplier just asked the trainer, let's do the training session. And let's make a combined effort here. Let's find two. What I really am looking at at the moment is this. Why not build a package together? We've got great software. We've got excellent software. We've got a lot of companies who do great implementation services on those software. And we've got a couple of training suppliers of which Geoacademy is one in Netherlands, but probably there's something like Geoacademy everywhere in every country. Let's work more together. That is the main message here. If you look at what we do here in these three days, it is excellent. We're all joining in together and we find new things and so why don't we do that in the rest of the year as well? Now I've been talking about this decision maker and this decision maker. Where is he on this scheme? Any idea? Sorry? I said I think he was just... Yeah. I don't know, most of the time he's not there. Because a decision maker most of the time is not the GIS user. It is somebody else. And we don't even meet him or maybe we do. Because a lot of these, especially the smaller open software companies, don't have an account manager who runs out to all the decision makers and they don't even meet him. It's a funny thing. But let's get back. As I started this presentation, GIS isn't easy. And that means you need training as a user. Now a trainer, trainer that is a professional. A trainer is a profession. And that is something different from the person who knows how to develop the software as an expert. That is a totally different skill set. So make sure you have a good trainer. The second thing is all training materials online ditch them. Unless they're from your country. Most important is have training materials in your own language. If I take GIS, I'd say that 70, 80% of the people I teach GIS to are not able to do that if everything, wouldn't be able to do that if everything was an initiative. Because it's just, you have to learn two things at the same time. And the third is use examples and exercises that a customer can relate to. So again, ditch all the online materials unless you wrote them yourself and come up with good cases, good solid cases that people can relate to in their work or in their hobbies. But that is most important. Now those three things, if you do that, you have a good training. Now I don't know, maybe you'll do this already. But judging by how many of you are here, which makes me kind of nervous actually. I really like to help out. And there's probably a lot more people could do that. But if you don't have one of these already or if you're thinking about this, I really like to help out. And as I said, I'll be here all week. And you can find me at the evenings at the Rocker Billy Caravan. Cheers. Thank you, Eric. So we actually have quite some time for discussion, questions, ideas. So whoever wants to start. Hi, Lena Fischer, University of Copenhagen. And you were out in the field, weren't you? Yeah, I was. Mobile? Yeah. I'm in a lot of places. That's cool. I come from a forestry college. We have students at the bachelor level. And we also have those company training courses two or three days. 
So I think you have to differ between university level and company level. From what you're talking about here, for short courses, for sure cases where people can relate, that's important. But for students who are going to develop and keep on learning, I always say to them, go to the internet, find tutorials, be the proactive ones, and then go solve the cases which I give you. So you have to take in mind what kind of users you're talking about. Yeah, that is absolutely right. I did a lot of teaching on a bachelor geo information school as well. What I try to do there is try to teach them spatial thinking. And I really don't care in the end of the day what kind of software they use for that as long as they get the job done. That is the main thing with students. But they have to learn the profession of being a GIS professional or a geo professional, anyway. And that is something different from what I was absolutely right on there. Thank you. Other reactions, questions? Part of the problem we have on cooperation is the licenses that we release our training materials under. I've been paid to write three different GeoServer courses over the years. And they're very good GeoServer training courses. They're all released under non-commercial licenses. So I can't use my new employer. I can't use those courses again. I don't know what we can do about it. Everybody is worried about being ripped off and losing the contract to somebody who hasn't paid for three weeks' work writing the course. Yeah, what can I say to that? I try to reuse as much as possible. What I do if I make a training course for a certain specific company, I always tell them that I use as much as possible. Normally I'd say I'm reasonably priced, not too high or something. But it's because I reuse a lot of material. So I don't have to do everything again and again and again. And I ask them so I'd like to do the same afterwards again. Because if there's one thing I don't like, it's rewriting the stuff I wrote before. I mean, it's a bit silly, isn't it? So I try to take care of that in advance and not afterwards. But yeah, well, that might help. I have an additional comment. Your name brother, Geoacademy in the States, they have actually open courses. They have five or six open courses. And they are all on GitHub. So you can actually make a fork and put in your own cases, which I have done with Danish data. And I can really recommend those with a lot of video tutorials, a lot of practical. And they also have open courses. So look into that too. Yeah, that is very, that's a good idea as well. But it's okay, there's the but again, shouldn't do that. Again, I'm thinking that it is for a specific target group. I think I love people. I'm really into that. Using Beats personal contact and listening to somebody with a problem. And it's just it helps. But it's personal opinion. There's a question that there isn't really a right answer, but I'd just be interested in your thoughts on exercises and how detailed or not detailed you find you have to make them from. One end of the spectrum, here's a problem, solve it. Here's the software, here's some data, do whatever the problem, ever needed to solve the problem. The other end, here is here is the steps. And if you follow these steps, you will get to the end. And clearly, there are circumstances in which you need to go to one or other end of the spectrum. 
But I wonder what your approach was and how you'd experience that with different bearing mind different groups of users will react differently to different approaches. We have a 20 day course and where people who are not schooled in due information but have to use it in their work, we retrain them. So we bring them to our geofort location a day and a week for half a year. And that really works. But from these, especially after day four, day five or so, they are always complaining that they just can't follow the steps because there are no steps. Because we don't do that. We help them with the steps. But we never, you know, these thick training manuals where you have to follow step by step and you do 20 pages per half hour or something like that. I was so fed up with that kind of material. I just don't do that. Sorry, can you work with the microphone because otherwise the streaming won't work. Yes I was just saying you kind of pull back the written support in order to try and get people to remember what they've been taught and use techniques. And after day 20, they always look back and say, okay, we had to relearn learning. But this is a much better way because we have really made it our own knowledge now instead of following steps and forgetting about it the next day. Hi, so I have a question. If do you invite your students, the end users, do you recommend that they come to attend events like this as well? And if you had a chance to partake in the organizing, what would you add or do differently perhaps to make sure that end users felt more welcome or you would give them a reason to come here? I always do that. That is one of the main things. Go out, explore the world. And they never do that. I'm not sure if there is a way to get people who don't go to conferences to get them to conferences. For some people it is natural to go to conference. For others it's dangerous way off. Just leave everyone in their own personal free habitat. I'm fine with that. I know it happens. I keep on telling them. I don't mind. I'm cool with that. But I would really love it if all the end users would be in a conference like this. That is true. Okay. I have one more question. It's about what I think is a drawback about a lot of the open source software. Is the lack of a proper help menu? It's also the stuff I hear from most professors at university. Do you think that is something where you could add maybe, or maybe you do, I don't know, the stuff you teach you also put it somewhere or maybe there is an option to put it in a help of the software you teach? I would be happy to do that. Here is my personal affiliate here. I don't know how to do that. So help me with that. And I would be happy to do that. Any other questions, remarks? Thank you. Okay. So thank you very much, Eric. Welcome. Thank you guys.
Every software platform needs three things in order to be successful: good and solid software, reliable support and maintenance services in order to make it run smoothly, and training possibilities for the end users to make sure they use it in the most effective way. Open geospatial software is the basis. And, with the progress that has been made in the last few years, it is among the best around. Support and other services are more and more common. In the Netherlands, you can identify several commercial organizations specializing in FOSS4G, and others adding it to their existing portfolio of services. But what about training possibilities? Few specialized training services for geospatial software exist, and service providers might do some on-the-job training, but it is by no means comparable to the well-oiled training machine that distributors of closed source software usually run. So, how to set up training courses for open source geo? At the Dutch Geo Academie we have some experience with this kind of training, and we'd like to share some ideas.
10.5446/20402 (DOI)
Okay. My name is Simone Giannecchini. He's Andrea Aime. We are going to talk a little bit about how to serve earth observation data. It's not going to be only earth observation data, it's going to be, let's say, spatio-temporal data, so it's going to cover a little bit also meteorological, oceanographic data, raster data that has additional dimensions, like time, elevations, and so on. We both work for GeoSolutions. It's an Italian SME. We are in Europe, so we can use that word. SME is a small company. We do work mostly inside the GeoServer community, but also we do work on other things, like web mapping, MapStore, metadata. Everybody loves metadata, especially in Europe. If you know the word INSPIRE, you know what I'm talking about. So, GeoNetwork. We started to work with GeoNode a little bit in the last year and a half. We've been working with things like CKAN for the open data portals. So we tend to work with these tools, inside these tools, outside these tools, etc., etc., giving services. This is the usual use case that we tend to work with when we use this terminology. I always like this slide. I used it the first time like 10 years ago in a military environment. That is where these terms come from, right? This is the recognized environmental picture, or common operational picture. Because one thing that militaries always have is a lot of data. They usually tend to fuse forecast data. That's why you have METOC, meteorological and oceanographic models, with in-situ observations as well as remote sensing. Because you want to compare what you forecasted with what is actually happening. Okay. Okay. Can you hear me? Yeah. So some basics about the ImageMosaic. The ImageMosaic is our basic tool to provide image mosaicking and support for multidimensional, N-dimensional data sources. So an ImageMosaic is just a collection of granules, images, that are put together by an index. The index contains the location of the files, the bounding box, and then several attributes that you can filter on, typically dimensions, so time and elevation. But you could have extras and that's actually quite handy in that you can query whatever attribute you attach to it. There are a few assumptions when working with the Mosaic. Let's put it that way, right? So there used to be the requirement to have all the granules, all the images in the same color model, so all RGB or gray or paletted, but that has been removed in 2.8. Now you can mix different color models to some reasonable extent. Like you cannot mix a digital elevation model and a satellite image at the same time. They are too different. In GeoServer 2.10, which is going to be released in October 2016, we are also going to remove the limitation that all the coordinate reference systems have to be the same, thanks to Devon here who did the work. So we can have a heterogeneous mosaic in different projections. Granules can overlap as they please. That's not a problem. You can control the stacking order. So there's a way to sort, I don't know, by date or by resolution or by whatever other attribute you think can drive the importance of a granule and push it up in the stacking. And granules can be in different file formats. The index, which is what makes the Mosaic tick, is normally just a table containing all the information about the granules. Typically it's implemented by using a GeoTools vector source. So it could be a shapefile, PostGIS, Oracle, or H2. Each one has its own advantages and disadvantages. Shapefile is really, really easy.
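To make the granule index idea just described a bit more concrete, here is a minimal Python sketch of what such an index amounts to: a filterable table of granule records with a file location, a footprint and the dimension attributes. The record fields and file names are made up for illustration; in GeoServer the index lives in a shapefile or a database table, not in Python.

    from datetime import datetime

    # Illustrative granule index: one record per file, with footprint and
    # dimension attributes. In GeoServer this lives in a shapefile or database.
    granules = [
        {"location": "temp_2016082400_0000m.tif",
         "bbox": (-10.0, 35.0, 20.0, 60.0),
         "time": datetime(2016, 8, 24, 0), "elevation": 0.0},
        {"location": "temp_2016082400_0500m.tif",
         "bbox": (-10.0, 35.0, 20.0, 60.0),
         "time": datetime(2016, 8, 24, 0), "elevation": 500.0},
    ]

    def bbox_intersects(a, b):
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def query(index, bbox, time, elevation):
        """Pick the granules a request for (bbox, time, elevation) would touch."""
        return [g for g in index
                if bbox_intersects(g["bbox"], bbox)
                and g["time"] == time and g["elevation"] == elevation]

    print(query(granules, (0, 40, 10, 50), datetime(2016, 8, 24, 0), 500.0))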
The shapefile is what we use by default if you just throw the ImageMosaic at a directory and tell it, well, OK, give me a mosaic of whatever is inside. It's going to mosaic it using the shapefile. But you can also set up some configuration files and make it mosaic stuff in a database, which is handy in that you can index the attributes, make for fast searches. And it also means that you can drive the mosaic by the database, so you can ingest new granules in it directly if you want to. The dimensions map to alphanumeric attributes in the index. So typically dates, times, and numbers. But it could be pretty much everything. So we can have also custom dimensions besides the basic time and elevation. And as I will show later, they can be advertised in the get capabilities documents of the OGC services so that the client can discover them and they can discover their domain so as to make queries on the multidimensional data. Typically when we play with data sources which are not natively multidimensional, so we are not talking about NetCDF here, but a bunch of GeoTIFF files, each one associated with a particular elevation and time, we have to get this information out from somewhere, like the time, the elevation, and so on. Typically in these environments, that information is coming from the file name itself. It could be embedded in the file metadata, but that's less common. So we have a bit of configurable machinery that you can set up to extract bits and pieces out of the file name using regular expressions. Once you have those set up in your mosaic, you can go in the user interface of GeoServer and enable the publishing of time, elevation, and custom dimensions. Here we have a snippet out of a capabilities document showing the time. And then we have a long list of times which are the available ones, and then an elevation and the list of elevations. And then we also have an updated and a file date custom dimension that we added and that you can also inspect and use in get map requests. The fun stuff about the image mosaic. What's the fun stuff? Well, mosaicking together images in space and in time or over dimensions is fun. It's interesting, but we can do more. We can do masking, for example. So we support masks in both vector and raster format. They can be the GDAL style binary mask embedded in a GeoTIFF file, or they can be sidecar files in WKB or shapefile. And if you are not happy with that, you can plug in your own API as a mask provider to get the masks from wherever. Why do you want to use masks? Well, I'm going to give you a few examples. One is compression. Maybe your compression is ruining the data at the borders between black areas and valid data areas. And you want to retain only the good part, and you cannot do it anymore by just saying black is going to be transparent, because along the border you have compression artifacts. That's one case. Another case, you are a satellite company of sorts, or you are getting aerial imagery of sorts, and you got flares and clouds in your images. You don't want to show them. You use masks to cut them out of the imagery. On the fly. On the fly, yeah, of course. Coverage views, it's another thing that we do with mosaics and the raster machinery in general. So sometimes you have data sources which are a bit complicated, like a NetCDF or a GRIB, and they can contain multiple phenomena registered in the same file.
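The filename-driven attribute collection mentioned above boils down to a regular expression applied to each granule name. The sketch below shows the same idea in plain Python with a made-up naming convention; the actual mosaic is configured through its own property files rather than Python code.

    import re
    from datetime import datetime

    # Hypothetical naming convention: <variable>_<yyyyMMddHH>_<elevation>m.tif
    pattern = re.compile(r"^(?P<var>\w+)_(?P<ts>\d{10})_(?P<elev>\d+)m\.tif$")

    def collect_attributes(filename):
        """Pull the dimension attributes the index needs out of a granule name."""
        m = pattern.match(filename)
        if not m:
            raise ValueError("granule name does not match the convention: " + filename)
        return {
            "variable": m.group("var"),
            "time": datetime.strptime(m.group("ts"), "%Y%m%d%H"),
            "elevation": float(m.group("elev")),
        }

    print(collect_attributes("sst_2016082406_0m.tif"))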
And sometimes you get this case, the typical case of wind files, that they contain actually two rasters, two separate rasters, which are the U and V components of the wind. And if you follow strictly the OGC model, you would have to publish them as two separate layers. And that's not very useful because they are related. And it's the relation that actually carries the useful information. So we have a tool called Coverage View. It's sort of parallel to the SQL views that we have in the vector world, in which we can put together the two coverages as two bands of one coverage. And at that point, we are publishing just one layer with two bands that we can then use to create wind barbs and the like. So it's both visualization and download. As I said, the index can be queried fully. So if you just want to use WMS-T, so the OGC standard extension for multi-dimensional data, you can. It's fine. But you are limited to saying, I want this time, and I want this elevation. What if you have more attributes, or you want to express more complicated queries? Well, in GeoServer, we have an extension that allows you to just put a CQL filter in the get map request and write whatever query you want. Like in this case, I'm making up a case in which I say, OK, give me anything that's in the index where the sensor is a SAR, and the satellite is this particular satellite. And I will get a mosaic of only the granules that satisfy that requirement. So you get a lot of power. You can literally build a very complex mosaic with several data sources and then filter dynamically what you want to see, depending on the user, their access rights, and whatnot. The image mosaic is also quite interesting in that it's very pluggable, so if you are a developer and it's not satisfying your requirements out of the box, there are a number of extension points that you can plug into, write your own little jar, your own little library that adds extra functionality to it, such as extra ways to collect attributes out of the granules, figuring out what coverage each granule is attached to, preprocessing the granules, deciding whether or not a particular granule is part of the mosaic or part of a particular coverage in the mosaic. The recent work about merging together images in different coordinate reference systems is done by a particular submosaic producer. So that's the part where you can put together the images. And of course, you can build your own. And, I don't know, for example, we don't do image merging. Like we don't average the pixels. We just keep what's on top, and what's below it gets hidden. But we could build, for example, something that merges together the images instead by some sort of alpha blending. That would be a new submosaic producer. We have the catalog, which represents the index. It's a way to access it. And that's also pluggable. And we have a few implementations out there. And Simone is going to show some examples where we are actually accessing a legacy catalog, which is implemented as a service. And it's not the usual database. For NetCDF I will pass the ball to Simone. OK. If you're still not sleeping, it's too late. Because now things will get clearer. We actually started on the wrong road, I believe, because we should have talked about the formats before and then the mosaic afterwards. Basically, the mosaic is the abstraction on top of which you can actually tell GeoServer to serve data directly from complex formats. So that's the point.
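Going back to the index filtering just mentioned: in GeoServer that is typically done with the cql_filter vendor parameter on a GetMap request. A minimal Python sketch follows; the endpoint, layer name and attribute names are made up for illustration, only the parameter mechanism is the real one.

    import requests  # assumes the 'requests' package is installed

    params = {
        "service": "WMS", "version": "1.1.1", "request": "GetMap",
        "layers": "acme:eo_mosaic",            # hypothetical layer
        "styles": "", "format": "image/png",
        "srs": "EPSG:4326", "bbox": "-10,35,20,60",
        "width": 768, "height": 512,
        "time": "2016-08-24T00:00:00Z",
        # filter the granule index on its custom attributes
        "cql_filter": "sensor = 'SAR' AND satellite = 'SAT-1'",
    }
    r = requests.get("http://localhost:8080/geoserver/wms", params=params)
    open("filtered_mosaic.png", "wb").write(r.content)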
If you're wondering why it's this sophisticated, we shouldn't use the word complex, I've been told it's sophisticated. The point is, we try to reduce the time that it takes to go from data to serving data. So that means preprocessing should be quick, or no preprocessing at all. But most of the real formats out there are actually pretty complex, I can use that word this time. If you ever work with NetCDF or GRIB or HDF, although now it's less popular, you can find everything in them. People claim to be convention compliant, and every time you find new things that are actually more or less compliant, but nobody thought about using them that way. So it's always something new. That's why the mosaic is that configurable. It's that sophisticated because it allows you to leave data in the way it is originally. Minimum preprocessing, you go directly from the data to serving. And this is especially important when you work with, for example, NetCDF data. As you might know, there are some formats which are like a de facto standard in certain communities. Like, for example, NetCDF for oceanographic data, although it's used also for some of the atmospheric models. GRIB for meteorological data. HDF4 and 5 for remote sensing, et cetera, et cetera. There is a certain number of formats that people are used to. And you don't really want to reprocess them entirely most of the time because actually, although they are not meant for serving directly, they are mostly meant for shipping data, because they contain a lot of metadata, not XML files, but, let's say, ancillary information, they can be used directly for serving. So if you know what NetCDF is, think about what a meteorological model or oceanographic model is. Think about doing forecasts for temperature in a certain area, certain forecast times, multiple elevations. NetCDF or GRIBs are actually containers that allow you to put all this data in a single file. So you can get huge files, even two terabytes, that contain forecasts for like seven days, six days, it's usually not more than that, at steps of three hours, six hours, 12 hours. So multiple elevations, many of them. Most of the time, it's a huge number of relatively small 2D grids. So the point is not about pre-processing them a lot, because the grids that you're going to use are usually small, but it's about being quick at finding them inside this file. So that's what the image mosaic is doing most of the time. It's trying to make sense of, let's say, flatten the structure of this file and be quick at finding directly what you need. We worked on the support for NetCDF. This was funded by DLR and actually GeoSolutions. A couple of years ago, in the past, we were always going from GRIB and NetCDF to stacks of GeoTIFF files. But as I said, it was taking too much time. So we actually bit the bullet and decided to go directly and support NetCDF. If you know what NetCDF is, supporting NetCDF means like speaking all the languages in the world. So we ask you to do your work properly. If you produce data, you can tweak your models most of the time if you want to. And at least support the CF convention. If you don't know what conventions are, I'm not going to explain, but they tell you how you should structure your files so that you can understand what's in the files without having to be told every time. That is the baseline, the CF convention, which is the most widely used, climate and forecast.
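A quick way to see whether a NetCDF file follows the CF conventions just mentioned, and what dimensions and variables it carries, is to open it with the netCDF4 Python library, as in the sketch below. The file name is made up, and the checks only make sense for CF-style files like the ones discussed here.

    from netCDF4 import Dataset  # pip install netCDF4

    ds = Dataset("forecast.nc")  # hypothetical model output
    if "Conventions" in ds.ncattrs():
        print("Conventions:", ds.getncattr("Conventions"))   # e.g. "CF-1.6"
    print("dimensions:", {name: len(dim) for name, dim in ds.dimensions.items()})
    for name, var in ds.variables.items():
        # CF keeps units and other descriptions as per-variable attributes
        print(name, var.dimensions, getattr(var, "units", "no units"))
    ds.close()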
Basically, what the mosaic is and what the mosaic does, if you remember Andrea talked about the granule index, is that it goes from a structure like this, multiple hypercubes, to something like that. So when you ask me for a certain time and a certain elevation and other dimensions, I know directly what I have to read out of the original file. Another thing which we did was actually, it's related to what Andrea said before, breaking the limit that was in GeoServer that you could have only a single coverage from a single file or, if you want, a single store. Because as Andrea said, most of the time in these files, being larger containers, you have multiple coverages. You can have hundreds of them. You can use coverage views to create new ones by mixing them and turning different geophysical parameters into bands. That's what you want to do when you have current fields, wind, currents at sea, things like that. You need the two bands together if you want to do nice rendering, like wind barbs, et cetera, et cetera. And this in the past was actually requiring pre-processing, because you had to take them separately, create a file with two bands, et cetera, et cetera, so it was taking time. I won't talk about this, but this is the full model of the Mosaic. Andrea showed you the simple example of the Mosaic. But if you want to support full NetCDF, there is a very nice XML configuration file. Everybody loves XML, as we said. But you can actually tell GeoServer exactly which geophysical parameters you want to be served out of the original file. Because most of the time, out of these big files, you will only want to use a few geophysical parameters, not all of them. Usually, half of them are control parameters, so you really don't want to serve them. But instead of asking people to pre-process them or reprocess the data and get rid of them, you just tell GeoServer to ignore them. Support for coordinate reference systems has been added. I talked about conventions. This is how you specify a coordinate reference system in a NetCDF file using the CF convention. So if you follow the convention, we have stored all the definitions, so we can create a coordinate reference system on the fly without you having to configure anything. Limitations that we have: the conventions, as I said, and not much more. Well, I'll go relatively quickly on this. The basic support for NetCDF allows us to actually handle these complex formats; it's NetCDF and GRIB, and if you know what I'm talking about, it's mostly the same library. Although they're different formats, there is a certain overlap between the two models. So with the same library, you can serve both of them. The image mosaic is actually used, once you have made sense of the internal structure, to allow you to create a long time series of this data. Because usually with models or remote sensing, you actually acquire similar data over time. It's a constant flow of data. So once you have configured this flow, you can keep adding data. The use case is usually different. Because when it's forecast, most of the time you don't want data which is older than 3, 5, 7 days, then it gets long-term archived. It's usually not used for direct serving. With remote sensing data, it's different because you tend to put online relatively long archives. I'm not saying that people throw away meteorological or oceanographic models. But they are usually not that useful after a while. Actually, you tend to run models multiple times a day. So you get multiple forecasts. And you always want to use the freshest forecast.
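The flattening described above, from hypercubes to index rows, can be pictured with a short Python sketch: one record per 2D slice, so that a request for a given time and elevation jumps straight to the right slab. It assumes CF-style coordinate variables named time and z, which is exactly the kind of assumption the real indexer lets you configure instead of hard-coding.

    from netCDF4 import Dataset, num2date  # pip install netCDF4

    def flatten_index(path, variable):
        """One index record per (time, elevation) slice of a CF variable."""
        ds = Dataset(path)
        times = num2date(ds.variables["time"][:], ds.variables["time"].units)
        elevations = ds.variables["z"][:]          # assumed vertical coordinate name
        records = []
        for ti, t in enumerate(times):
            for zi, z in enumerate(elevations):
                records.append({"location": path, "variable": variable,
                                "time": t, "elevation": float(z),
                                "time_index": ti, "z_index": zi})
        ds.close()
        return records

    # for record in flatten_index("forecast.nc", "temperature"): print(record)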
For a single time slice. Well, it would take a little bit of time to explain in detail how you can use the XML definition for actually telling GeoServer how to extract information from a NetCDF. It's basically not that complex. It's relatively sophisticated. You can actually define which dimensions you're going to support, time, elevation. You can have additional dimensions. In this case, we actually have four. Because we keep track of when the file was first generated and when it was updated. Because it's not that rare to rerun the model right away and update the files. And then you actually tell GeoServer, OK, harvest the information from the files, put it in a database, and expose only these variables using the domains that I defined. So what you end up having, as Andrea showed quickly, is actually your data plus metadata in a database, rows that describe each single slice of your data. And that's what GeoServer uses when you're going to serve them as WCS or WMS. Every time you do a request like get map, time, elevation, et cetera, et cetera, we use the index to instruct the machinery to get the data directly from the original format. So if you know what I'm talking about, you can go from model to WMS and WCS in just the time it takes to move the data. For indexing, we created a few REST operations. Because as I said, it's usually a constant flow of data. So once you set this up, you just want to add data and remove data. And there is a REST API to do these operations. Adding data, querying the index, updating data, and deleting data. So once the flow is set up, you don't have to do anything. As I said, there is support for GRIB and partially GRIB2, because we use the same library. So as the library underneath gets better, the support gets better. Yeah. So once you have set up your data, you have configured it, you can start playing with it. So a few examples of what kind of requests you can make. So here is a get map with the time and elevation and two custom dimensions, extracting a particular slice of data out of the multi-dimensional mosaic. When we display, when we have this kind of raster data, sometimes it's good to display based on a false color or a color map, but sometimes you want to extract particular features. So we can use the rendering transformations, which are an extension to SLD that we added in GeoServer. And that has been particularly well optimized for performance. And you can use it to extract on the fly from your raster data contour lines, current fields, and wind barbs, and more. The system is completely pluggable. So if there is an on-the-fly transformation that you are missing, you can implement it in Java and just drop it in GeoServer. On the fly in this case means that you actually configured your mosaic without pre-processing. You set up this style, and you get wind barbs out of the raster data directly without having to pre-compute them. And it's decently fast. The same goes for contouring. In the past, we were actually turning everything into GeoTIFF, computing all these things with GDAL. So it was taking more time for the pre-processing than actually getting the next run of the data, if you know what I mean. Because especially with meteorological models, when you're doing operational work, sometimes they run them on demand. But usually, they run at a fixed period. And it's usually, it can be three hours. Sometimes it can be one hour. Models run for a long time. So it cannot be five minutes. But you know where I'm going. Right.
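The REST operations mentioned above are easy to script. The sketch below uses the structured coverage REST endpoints as I recall them from the GeoServer documentation (harvesting a granule, listing and deleting index entries); workspace, store and coverage names, paths and credentials are all made up, so double-check the exact URLs against the docs for your version.

    import requests

    auth = ("admin", "geoserver")   # default credentials, change them in production
    store = ("http://localhost:8080/geoserver/rest"
             "/workspaces/acme/coveragestores/sst_mosaic")   # hypothetical names

    # Harvest a new granule already present on the server's file system
    requests.post(store + "/external.imagemosaic",
                  data="file:///data/sst/sst_2016082500_0m.tif",
                  headers={"Content-Type": "text/plain"}, auth=auth)

    # List the granules currently indexed for one coverage
    print(requests.get(store + "/coverages/sst/index/granules.xml", auth=auth).text)

    # Remove old granules matching a CQL filter on the index attributes
    requests.delete(store + "/coverages/sst/index/granules.xml",
                    params={"filter": "time BEFORE 2016-08-01T00:00:00Z"}, auth=auth)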
This way, rendering on the fly, we totally skip the pre-processing, and the data gets published right away. And you can also customize on the fly some of the parameters of the transformations. So we also support WMS-EO for complex Earth observation products and their derived products. So it's a way to advertise the structure of the product and also advertise masks, like cloud masks or water masks and the like, which could be raster or vector. The protocol is rather complicated. So we built a UI to help building the product tree, which is a layer group in GeoServer. But a particular one, a particular type. And once you have set it up, you can just preview it as normal. But by default, you will get the browse image of it. And then if you have a client that is EO capable, it will be able to extract also the other products. We can also download the data, of course. We implement WCS 2.0. We have a full implementation of the protocol. The protocol is really, really pluggable. So the basic implementation can just do crops of the data without anything else. But GeoServer implements also all the other extensions, so that you can rescale, reproject, and also control how you are encoding the outputs. So this is a DescribeCoverage output, which is, let's say, similar to the get capabilities in WMS. So you get a standard summary at the top, which is a 4D bounding box with space, elevation, and time described in terms of limits. And then the protocol allows for custom sections in the output, and we leverage that to enumerate the time, elevation, and custom dimension values so that you can fully discover them. Of course, this is a GeoServer-specific extension. So you will need a custom client to understand it. This is a NetCDF extraction. So you can also, if you have N-dimensional data, have an N-dimensional output. In this case, I'm making a 4D bounding box query, and I'm getting out a hypercube of data. And we also have WCS-EO, which adds some extra metadata, again similar to WMS-EO. I'm not going to get into details about that. Simone is now going to talk about some real-world use cases. Quickly. The way we use the image mosaic is usually quite sophisticated, and we tend to integrate it with the existing infrastructure. Because one thing is the preprocessing. The other thing is that most of the time, people will have their own catalog information. Don't think about GeoNetwork, I mean catalogs like that. It's where they actually store the information: where the data is, what the information about the data is, time, et cetera, et cetera. So that's why there are all those extension points in the mosaic, because you can actually plug in custom behavior that relates to your own infrastructure. This is a simple use case. We integrated, it's actually a vertical application that we did for a client, where they sell access to imagery and processing that they do on top of the imagery. It's SAR data, and it's oil spill and ship detection. The whole point is that all the vector information and the information about where the raster data is comes from their legacy catalog, and the image mosaic and WFS, et cetera, et cetera, is doing the real work. But relying on the information that is coming from the external system to filter images, and we created a custom Resource Access Manager. So security is customized so that it takes into account the rules which they have. It's a relatively light integration.
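For the WCS 2.0 downloads mentioned above, the subsetting is expressed with repeated subset key-value parameters. A hedged Python sketch follows; the coverage id and axis labels are illustrative (in GeoServer the axis labels depend on the coverage CRS), and the NetCDF output format requires the corresponding output extension to be installed.

    import requests

    params = [
        ("service", "WCS"), ("version", "2.0.1"), ("request", "GetCoverage"),
        ("coverageId", "acme__sst"),                    # workspace__coverage, made up
        ("subset", "Long(5,15)"), ("subset", "Lat(40,50)"),
        ("subset", 'time("2016-08-24T00:00:00Z")'),
        ("format", "application/x-netcdf"),             # needs the NetCDF output plugin
    ]
    r = requests.get("http://localhost:8080/geoserver/wcs", params=params)
    open("subset.nc", "wb").write(r.content)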
This is actually a much deeper integration that was done for a company that sells remote sensing imagery, like a lot of imagery, although we cannot use the name. And it was, I wouldn't say a complete rewrite, but the customization is very deep, because they had the information scattered in a legacy system, and it's information they'd been using for ages. So restructuring something, or if you want, restructuring anything, was not possible. But we were able to put GeoServer and the image mosaic on top of it using the extension points we talked about before. So the catalog, the information about where the data is, and the metadata come directly from their storage. Information is filtered according to the user, and the filtering rules are coming from them. We didn't customize GeoServer too much. So they're still using a more or less standard GeoServer. We wrote a custom store that is actually used also for WFS, so you can query the information about the data using WFS. And this is something I will not describe, but it's actually a fully-fledged rewrite of GeoServer for client XXX. It's actually GeoServer, split apart, and it's actually quite a sophisticated use case. They basically used GeoServer as a library as opposed to as an application. And I mean, we are talking about petabytes of data, and GeoServer is actually serving directly from where the data is, from the catalogs, and from the real-time data ingest. And that's it. Any questions? I wonder who has a question; we can hand over the mic, and then you can ask. Hi. I was using the ImageMosaic previously to publish a large amount of satellite data with a large coverage. And the problem which I experienced is once you have the high-resolution data, and you zoom out, it of course has to read a whole lot of data. And by that time, it didn't support, as far as I remember, having pyramids within an ImageMosaic. Is that something you support now? Yeah. But you will still have that problem. I mean, when you use a mosaic, the trade-off is between how many files you need to answer a single request and how fast you are, even if they are completely optimized. But if you have to open 1,000 files to do a single map... In that case, you probably would recommend having multiple layers. Yeah, that's what I did, like three different layers or four different. OK, cool. Thanks. Any other question? OK. Thank you, Andrea. Thank you, Simone. All right, we're good. Yeah. Thank you.
The presentation will cover GeoSolutions experience in setting up GeoServer based production systems providing access to earth observation products, with indications of technical challenges, solutions, and deployment suggestion. The presentations will cover topics such as setting up a single unified mosaic from all the available data sources, tailoring access to it to different users, determining the most appropriate stacking order, dealing with multiresolution, different coordinate systems, multiband data, SAR integration, searching for the most appropriate products using a mix of WFS, CSW and so on, serving imagery with high performance WMS and WMTS, performing small and large data extractions with WCS and WPS, closing up with deployment examples and suggestions.
10.5446/20401 (DOI)
I'm going to talk a little bit about how we, let's say, put you a summary of an eight-hour workshop, which I did in four hours already. So two times, if you were at a workshop and you thought it was going to be quick, it's going to be quicker. But the information is available online. So this is actually, let's say, the table of content for the workshop, and you can put it in the workshop for more information. I'm hoping you know where we are, otherwise you can check it, so I won't spend time on this. And I go directly to the content. Basic information, this is the boring information, but it's the information you need to understand because how can you put your server in production if you don't know and you don't understand the release model of GeoServer? Because basically you don't know which is the right version to use in production. So the first thing is to understand the release model for GeoServer is it's time boxed. That means we more or less know when next release will be made. There are always three branches at any time. Development, that might tell you something, the name. You might not want to use development in production unless you really know what you're doing, or you actually have no idea what you're doing if you use it. It's one way or the other. Stable, the name is again pretty obvious and maintenance. What is the difference? Every branch lives for six months. You start as development, then you become stable, then you become maintenance, and then someone has to pay if he wants to release because we're not going to release that anymore. So let's put it this way. If you start now, you might want to look at development for doing testing, so you see the new features, you wait for development to become stable, you test that a little bit, and in production you always want to have stable or maintenance. If you follow what I said, if you don't need new features, you can upgrade once a month and keep using maintenance. GeoServer will upgrade the configuration for you. We usually guarantee that if the jump is not too long, GeoServer will automatically upgrade the configuration. If you jump from, let's say, 2.2 to 2.8, maintenance right now is 2.8, you might have issues because we cannot guarantee that such an older version will upgrade automatically, but if you follow the rules, you won't be asked to update too quickly. As a user, he says, since I don't do development by talk to clients and talk to the DevOps team, release early, release often is extremely nice if you are a developer, if you are someone who wants to maintain GeoServer or another application in production, you don't want to upgrade an update often. Actually, you want to do it only when it's needed. So what I'm telling you, on average, if you don't have any problems, you can do it once a year. That's fine. Don't be scared by native releases. In GeoServer, we do test GeoServer quite a lot. Automatically, we do need testing plus some URL testing every night for the release. It should be the compliance interoperability testing that actually tests the compliance with OTC, but also tests that things are still working, if you know what I mean, by sending requests. Doing unit testing is not enough. You can test the dv.component, they work together, you put them together, everything stops working. That's why you need also URL testing. URL testing means you stand up, you serve, and you send the request, and you check the results. That's something we do every night when we produce a night release. 
You shouldn't be scared about using nightly releases if they come from the right branch. If you are on maintenance, I'm using 2.8.5 right now. There is a bug fix that I need tomorrow. The next release will be in two months. I can use the nightly release. That's what we do, at least. Before you start, let's say, talking about tweaks and everything, the first thing you need to understand is actually your data, a little bit about your users and how they will use your services, and the deployment environment. Because you can test everything in an environment, you put it in a different environment, and I know there is Docker, but you will not ever be able to replicate the entire environment. It might be that the host operating system, maybe the security system, is different; there are a thousand different degrees of freedom. The first thing is to use your brain and understand what you're going to do. Not just look for documentation or look at the logs. The objectives, the text is probably too small. You can actually check the song if you know house music. We want to make it harder, better, faster, stronger. It's about scalability, performance, and robustness. It's both pure performance and perceived performance. The difference is that perceived performance is in the eye of the users. Performance is something you can measure. Perceived performance is not the same thing. To give you an example of what the difference is, think about tiled maps and untiled maps. Leave the caching question aside. If you take a map and you make it tiled, it will take more time to fill the map with respect to the untiled map. But perceived performance is better. Why? Because the user sees something right away. You remember the effect of the spinner? Okay. So you need to care about that as well. Scalability, let's say, is being able to cope with increasing loads with good results; performance is simply being fast. The two things are orthogonal. You can be extremely fast and not scalable at all. How do you do that? You give all the available resources to a single request. And you process one request at a time. So you are fast, but you won't scale. You can be extremely scalable and extremely slow. You try to use as few resources as possible and pack together as many requests as possible. Of course, you want to be fast and scalable. Robustness, well, you don't break under load. Now there is a long list of different things you can do. I will not go into details about all of them. For example, I will not explain a lot about how you can prepare your raster input data, because we have been talking about this for ages. There is a ton of documentation. Basically, the message is that you need to understand your use case, you need to understand what your user wants, and the environment. Many people, most of the time, start right away experimenting with format conversions, compression, without having clear in mind what they are doing. I will give you an example. If you want to optimize access to NetCDF and GRIB, if you don't know anything and you want to optimize blindly, the best thing to do is to not do anything. Why? Because usually these formats are used in environments where the data is relatively small in terms of spatial grids, so they are usually pretty fast. You don't want to spend a lot of time in pre-processing. The one key point to remember when you pre-process data is that it takes time, especially for large raster data.
So, doing sophisticated pre-processing might give you the best performance, but in certain cases, it might make your data old, which means useless. Think about an atmospheric model, a meteorological model, oceanographic models. The key is to actually get the data out as quickly as possible. We have been talking about this in some other presentations. For example, yesterday, about GeoServer, we showed the many tweaks we have done in order to allow you to do sophisticated things like wind arrows, contouring, etc., etc. on the fly without doing pre-processing, because the key was to get the data out as quickly as possible. Assuming you want to go for performance, be careful with the formats that you use. Some of them might not be a good fit for performance. If your format is an ASCII format, that might be problematic. I'll go quickly here, but the one thing I can tell you: when we do sophisticated pre-processing, we tend to always use GeoTIFF. GeoTIFF is like a Swiss knife. It can support many different optimizations. You can use BigTIFF to break the limit of 4 gigabytes. When you have extensive raster data, you might want to revert to using things like mosaics or pyramids. We could talk for hours about what to use when, and there is no single magic recipe to give you up front. If you saw what Andrea talked about, we're actually moving towards making it easier to update large pyramids. The excess granule removal helps you to add data on top of existing data without having to actually remove the older data right away. If you know what I'm talking about, you can leave some older data there, GeoServer will not read it. So there can be overlaps. If you can exploit time dimensions and create spatio-temporal pyramids, it's easier to do updates and add new data. Vector data, similar things. You don't really want to use GML or GeoJSON as an input format for serving. If we need to explain why, you might need to spend a little bit of time studying spatial databases. Usually, you want to use a format which is, for vector data, easy, sorry, fast, when you're going to extract a small portion of the data. Most of the time, although it might not seem like that, vector data is structured, at least logically, in records. What you want to do usually is extract quickly a portion of these records using filters. Filters can be alphanumeric or can be spatial or a combination of the two. Quickly: shapefile is good as long as you don't need complex queries. If you need to run sophisticated or complex queries using alphanumeric attributes, at least in GeoServer, there is no support in shapefiles for indexing alphanumeric attributes. If you have a 4GB shapefile that you want to use as a background layer and you have a complex style that actually styles things differently, filtering them depending on alphanumeric attributes, and it's slow, then you would want to put this data inside a database, because otherwise, every time, we load the entire dataset and filter in memory. For shapefiles, we only have spatial indexing. I know it might seem an obvious thing, but it happens every two months with a client, because people have the tendency to put everything on the screen and to render everything at all scales. Of course, that is not the best thing to do. Shapefile is good if you actually know you are going to render everything and you're only going to filter taking into account the area of interest, spatial filtering.
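As one concrete example of the GeoTIFF pre-processing being discussed, a common recipe is inner tiling plus compression plus overviews, here driven from Python through the GDAL command line tools. File names are placeholders, and JPEG compression is lossy, so whether this recipe fits depends on your data and use case, which is exactly the speaker's point.

    import subprocess

    src, dst = "input.tif", "optimized.tif"   # placeholder file names

    # Inner tiling and compression, plus BigTIFF to go past the 4 GB limit
    subprocess.check_call([
        "gdal_translate",
        "-co", "TILED=YES", "-co", "BLOCKXSIZE=512", "-co", "BLOCKYSIZE=512",
        "-co", "COMPRESS=JPEG", "-co", "JPEG_QUALITY=85",
        "-co", "BIGTIFF=YES",
        src, dst,
    ])

    # Overviews, so zoomed-out requests do not read the full resolution data
    subprocess.check_call(["gdaladdo", "-r", "average", dst,
                           "2", "4", "8", "16", "32"])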
In that scenario, when you only filter spatially and render everything, shapefiles are usually more scalable and faster than spatial databases, because the number one problem with spatial databases in terms of scalability, not performance, you remember what I said, is that you are limited in the number of connections that you can use. You can make your queries as fast as you want, but on average, you won't have 20,000 connections available. Let's say we're leaving caching aside for a moment. Think about, for example, a simple map rendered from PostGIS or Oracle. Let's say we render these maps in 50 milliseconds. We can do how many requests per second? 20? 200? 20. Okay. So that means if you want to do 20 requests per second, you need at least 20 connections. In most use cases, in enterprise applications, you don't have that many connections because databases are shared, you don't have control over them, so you will need to take this into account. Scalability will be an issue. You might be fast, but scalability will be an issue. So you will have to use your connections wisely. In GeoServer, correct me if I'm wrong, we try to use two threads for the rendering, one for reading data, one for rendering. Why? We could load everything in memory and throw the connection away, but that would give us speed, but not scalability. Remember what I said before, because we put all the resources on a single request. So we tend to do this thing. We load data in chunks, so we are relatively fast, but we are scalable. The downside is that we use the connection for a longer time. We don't use the connection for the entire time of the request, but let's say 60%, 70%. It depends a little bit, but that's more or less the idea. So that gives you an idea, if you know how fast you are, of how many connections you need for a certain throughput that you want to reach. Okay. Of course you need to index your data. If you don't know how, again, you might want to read a little bit about how spatial databases work, but if you need help, if you... I never remember if it's raising or lowering the debug level. You want to go to verbose logging, something like that. GeoServer will spit out the SQL queries that it's actually sending to the database. So you can take them, put them in your database tool and analyze them. There is a lot of material on how to properly handle connection pooling in an enterprise environment. It might seem simple. It is not that simple, depending on how your environment is structured. There are DBAs in the mix that are actually killing connections and you don't know when. There might be software and hardware appliances that are killing TCP connections when they are idle. We learned it the hard way. So you really want to understand how to configure connections in detail. It's not only about validating them. You always want to validate them, but there are also ways to actually, in the background, make sure that they are valid, they are reusable, et cetera, et cetera. Because if they're not, at the very least, you will get errors. In a very bad case, you will have to restart your server because the connection pool will get exhausted. One suggestion that is worth following for both shapefiles and spatial databases: when you have many, many attributes in your tables and you're not going to use them in your styles, in your serving, et cetera, you might want to leave them aside. That will make the shapefile smaller, but it will also make the server use less memory for the spatial database connection, because when we fetch the data, we will fetch less data.
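Stepping back to the connection and throughput arithmetic above: the relation the speaker is sketching between throughput, per-request time and connections is essentially Little's law, concurrency equals throughput times the time each request holds a resource. A back-of-the-envelope Python sketch with made-up numbers follows; the 60 to 70 percent connection-hold fraction echoes the figure quoted in the talk, your own numbers will differ.

    # Little's law sizing sketch: connections ~ throughput * time the connection is held
    render_time_s = 0.050        # end-to-end time of one request (made up)
    connection_share = 0.7       # fraction of that time a DB connection is held
    target_throughput = 200      # requests per second you want to sustain

    connections_needed = target_throughput * render_time_s * connection_share
    print("connections needed: about", round(connections_needed))      # ~7

    # Or the other way around: what a fixed pool can sustain at best
    pool_size = 20
    print("max throughput: about %.0f req/s" % (pool_size / (render_time_s * connection_share)))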
The server should be smart enough for databases to not load attributes that it doesn't need. But as I said, the first optimization is to use your brain, so this is an easy optimization. Can you hear me? Styling-wise, well, the server has many styling capabilities. You want to make maps that are readable by the user, so never try to show too much data. If you have a detailed road network of a nation, you don't want to show it completely when you are to 1 to 1 million or something like that because it would be a black blob. It's wasting CPU and it's also wasting the time of your users because they won't be able to read the map. Pay attention to labeling. Labeling is also expensive because there is a conflict resolution engine going and it's going to do some attempts to place the labels around. We have a new optimization in terms of quality in the server 2.9 which makes for better space allocation because we allocate space for characters as opposed to the bounding box of labels, but it's adding some overhead, so there is a system variable to turn it off if you find that it's slowing down your map production too much. Avoid using too many feature-type styles because the server will allocate multiple rendering surfaces to answer your request, which means it's going to use more memory. If you are using Z-ordering, which is a new ability that we have in just over 2.9, to make sure that underpasses and overpasses and stuff like that are in the right position in the map, also be careful about not doing that at all zoom levels because we have to keep a certain amount of data in memory to go back and forth and do the Z-ordering correctly. Tiling and caching with GWC, tiling is always a very important thing to consider when doing maps because everything that's not changing can be tiled cached and provide a very significant speedup. Plan out what layers you want to tiled cache on, which you don't because why don't you want to tiled cache? Well, because tiled cache takes a lot of disk space, so you might not have enough for everything. GWC is embedded in the server, so that gives a speed boost compared to an external tiled cache. Disk space considerations. So take into account that going up to level 20 might require gigabytes, if not terabytes of storage, so depending on your area, of course. So take that into account. Take into account client side caching, sometimes you can tell the client not to request the same tile over and over for a certain amount of time, and that's sort of the ultimate optimization because the client won't bother the server at all. It's not just not producing the tile, it's not even transferring it, which is great. Choose the right format for your tiles. So normally it's PNG for vector data and JPEG for raster data. We have the new JPEG or PNG if your raster overlays have transparency, so consider using that. There's also vector tiles, it's a community module, but it's very nice in that the vector tiles are more compact and they can be over zoomed. So instead of building your tile cache up to level 20, you can just stop at, I don't know, 16, 17 and then over zoom on the client side because this information is vector. In G-server 2.9 we have embedded support, which is very nice to move around tile caches between servers, so you can actually prepare a tile cache on one server offline and then put it on a production server, which is, again, nice for giving your user a good experience. 
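To put numbers on the disk space considerations mentioned above, here is a tiny Python estimate for a fully seeded global cache: tile counts grow by a factor of four per zoom level, so a complete global seed to high zoom levels quickly reaches terabytes and beyond, which is why real deployments cap the zoom levels or restrict the seeded area. The average tile size is a made-up figure.

    # Rough disk budget for a fully seeded global tile cache
    avg_tile_kb = 15           # assumed average tile size (PNG8/JPEG), made up
    max_zoom = 18

    total_tiles = sum(4 ** z for z in range(max_zoom + 1))
    total_tb = total_tiles * avg_tile_kb / 1024.0 / 1024.0 / 1024.0
    print("tiles: %d" % total_tiles)            # roughly 9e10 tiles
    print("disk: about %.0f TB" % total_tb)     # on the order of a petabyte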
When you have a cluster, you might want to choose between having a single shared cache, which is probably the best option if you are pre-seeding everything, versus smaller independent caches, local to the hard drive, when you are using disk quota and just caching on demand. The trick here is not having to write all that much on a shared file system, because most shared file systems suffer a lot when you keep on writing, creating new tiles and deleting tiles continuously. Simone? Yeah, quickly, one thing about this: you can actually mix and match independent caches and shared caches, because different layers can use different storage, so you might want to use shared caches for layers that don't change at all, like background layers, and smaller local caches for layers that change more frequently. The deletes will take forever if you put them on a shared file system, so you might want to consider having them independent, which means duplicating them, so you pay a penalty doing that, but if you compare the time it takes to delete them and create them, you are actually much faster. Okay, assuming you have optimized everything, you want to measure what you have done. This is a graph Andrea explained to me a while ago; it's your friend when you're actually looking at the results. It's two curves. I'm an engineer, I'm not a scientist, so I tend to simplify; if you're not happy with the curves, I know things are a little bit more complex, but I never liked math, okay? There's the throughput, the green curve, the one that grows and then flattens out and goes down, and the response time, the red one. Real systems, again it's a simplification, after they have warmed up, okay, they tend to behave this way; when you have to warm them up, there might be fluctuations, and there will always be some fluctuations, but I mean, we are simplifying. What you want to do when you optimize is actually have the throughput curve grow. The load is usually measured by increasing users, or threads, so you want the throughput to grow, more requests per second, okay, and you want the response curve to stay down as much as possible. At a certain point, you will hit a bottleneck: it can be the CPU, it can be the disk, it can be the network, it can be the memory, it can be your server, because, I mean, there are bottlenecks in the software as well, okay, and there is one that we talked about before, and there can be some in the rendering. So the objective when you measure is actually to improve the curves. Now, once you have optimized everything, you want to make sure that the throughput will not fall down, okay, as you reach the maximum utilization, and you will ensure fairness, so if you get more requests than you can cope with, you will start queuing them and rejecting them, instead of trying to serve all of them and not being able to serve any, because you're not going to be able to serve any more requests, so you need to make sure that you have the resources. If the load keeps increasing after a certain point, things will not get better, they get worse, so we need to make sure we don't reach that situation. There are two ways to put a server in a crisis situation. One is to send requests that are too big, and we have the resource limits, per service, to actually protect against that, like in a denial of service attack.
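For the measuring part just described, a serious benchmark would use a tool like JMeter, but even a small Python script that fires concurrent GetMap requests and reports throughput and average response time lets you draw the two curves the speaker describes, by repeating the run with increasing thread counts. The URL, layer and parameters below are placeholders.

    import time
    import threading
    import requests

    URL = "http://localhost:8080/geoserver/wms"              # placeholder target
    PARAMS = {"service": "WMS", "version": "1.1.1", "request": "GetMap",
              "layers": "acme:roads", "styles": "", "format": "image/png",
              "srs": "EPSG:4326", "bbox": "-10,35,20,60",
              "width": 512, "height": 512}

    latencies, lock = [], threading.Lock()

    def worker(requests_per_thread):
        for _ in range(requests_per_thread):
            t0 = time.time()
            requests.get(URL, params=PARAMS)
            with lock:
                latencies.append(time.time() - t0)

    threads = [threading.Thread(target=worker, args=(20,)) for _ in range(8)]
    start = time.time()
    for t in threads: t.start()
    for t in threads: t.join()
    elapsed = time.time() - start

    print("throughput: %.1f req/s" % (len(latencies) / elapsed))
    print("avg response: %.0f ms" % (1000 * sum(latencies) / len(latencies)))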
Each service has parameters and settings to make sure you can limit how much memory we use, how many rows we read, how much data we generate, and for how long a service can run, and this depends on the service. There are settings for WMS, WFS, WCS and WPS. Andrea showed some enhancements to the WPS ones: you can make sure a WPS process doesn't stay in the queue for too long. It's not as simple as that, by the way; the process has to cooperate, because you cannot kill a thread in Java unless you want to kill the entire application. There is a control-flow plugin that you can install to do quality of service and throttling, which means: if I get more requests than a certain number, I will start to queue the requests instead of trying to serve them all, and I can also do real quality of service, like saying for these users I don't want to allow this, and my throughput cannot be more than X requests per second. There is no time to explain how that works, but it's pretty sophisticated. And it actually allows you to do something like this. The blue line is, let's say, the measured throughput, and you see that at a certain point it starts to go down. So what you usually do is use control flow to make sure that, for example in this case, your server will never have more than 60 requests executing at the same time. Follow me here: it's not 60 requests per second. If each request executes in 50 milliseconds, 60 requests executing at the same time correspond to roughly 1,200 requests per second. This is the number of requests that are executing. If you have four cores (I would need more time to explain this properly), ideally you won't be able to effectively execute more than 8 or 10 requests at the same time. Not per second; at the same time, as in lines of code that are actually executing. There will be some I/O wait, so if you have four cores it could be 8, because threads can switch while they wait for a connection or are writing and so on, but it will never be 200, okay? Okay, there would have been an interesting part at the end, but I didn't have time; you can check the presentation. It's about what you do afterwards, when you are in deployment. That is one thing which I just want to show you: when your server is in production, that's when the fun starts. I had another thing, but I don't remember where that is. You need to be prepared, because everything will fail sooner or later; if it doesn't, nobody's using your services, so keep that in mind. You just need to be prepared for when that happens. So monitoring, logging, taking snapshots, et cetera, et cetera. That's it. (Applause) Yeah, so maybe one quick question, because I was a bit late. Anyone? Maybe right at the beginning you said that there's no stable release anymore, or did I understand it correctly? There was no stable release for GeoServer? No, there's always a stable release. Yes, but there's no maintenance, or I just didn't understand what you said exactly. At any time there is a development branch, a stable branch and a maintenance branch, and the releases are time boxed, so you get releases every one or two months, more or less, and they alternate. So basically we release one month off the stable branch, one month off the maintenance branch, the next month off the stable again, and so on, and every six months we switch: the development branch becomes stable, the stable becomes maintenance, and the old maintenance goes away. OK, then I probably just misunderstood. Thank you.
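The arithmetic relating concurrent requests, response time and throughput is just Little's law (concurrency is roughly throughput times response time). The sketch below is not the control-flow plugin; it only illustrates the idea of capping concurrent work with a semaphore so that excess requests queue instead of all executing at once. The numbers are taken from the talk and used as assumptions.

```python
import threading
import time

MAX_CONCURRENT = 60      # cap on requests executing at the same time (as in the talk)
AVG_RESPONSE_S = 0.05    # 50 ms per request (as in the talk)

# Little's law: throughput = concurrency / response time.
print("max sustainable throughput:", MAX_CONCURRENT / AVG_RESPONSE_S, "requests/second")  # 1200.0

slots = threading.Semaphore(MAX_CONCURRENT)

def handle(request_id):
    """Requests beyond the cap block here (queue) instead of all executing at once."""
    with slots:
        time.sleep(AVG_RESPONSE_S)   # stand-in for the actual map rendering work
        return f"rendered {request_id}"
```

The point of the cap is the same as in the talk: with a handful of cores only a handful of requests can truly execute at once, so letting hundreds run concurrently just trades throughput for memory pressure and context switching.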
The presentation will describe how to set up a production system based on GeoServer from the points of view of performance, availability and security. The suggestions will start by covering how a single-node GeoServer should be prepared for internet usage, tuning logging, connection pools, security, data and JVM preparation, keeping disk, memory and CPU usage in check within the limits of the available resources. We'll then move to tools used to monitor the production instances, ranging from probes to request auditing and watchdogs. Finally the presentation will cover setting up a cluster of servers and the strategies for keeping them in sync, from the traditional multi-tier setup (testing vs production) to systems that need to keep an ever-evolving catalog of layers constantly online and in sync.
10.5446/20395 (DOI)
So, good morning and welcome. First off, we have Mauro Bartolomielli to talk about mastering GeoServer, security with GeoServer and Geofence. And without further ado, we'll let him get started. Thank you very much. So today we are going to talk about security. The first thing I would like to say is that security is not fun. Security is art. It's very hard to achieve. We are going to look at how GeoServer tries to handle security and all the available features that you can use to make your infrastructure secure when GeoServer is involved. So security is art. And we at Geosolution work to make it simpler for our customers, especially when GeoServer and one of our products is involved in creating an application infrastructure. Geosolutions is my company in the sense that I work for Geosolutions. I'm not the owner. But it was founded in 2006 and is involved in GeoServer developing in every aspect included security stuff. Okay, I will give you an overview of what security is in general and how this is implemented inside the GeoServer infrastructure. Basically when you talk about security, you have to talk about two different aspects. The first one is how you get identity of your users and how you can trust this identity to make sure that people that access your system or services that access your system are trusted to do what they are allowed to do. And they cannot do what they are not allowed to do basically. So this is what we call authentication, getting identity and trusting that identity. The second very important aspect is how we handle authorization, so how we access our resources and how we basically deny access to resources that are not allowed to be accessed by some particular users. Inside GeoServer, these two different aspects are both handled by different subsystems of the generic GeoServer security infrastructure. You can see here a basic schema of the libraries and components that are involved in security. The main block is spring security. GeoServer is fully based on the spring framework in most of its infrastructure and also for security we use a module of a spring that is called the spring security. So each of the components of GeoServer and all the security is basically an extension of the standard spring security components. Then other components are involved like the dispatcher, for example, that is the main entry point for every request that comes to GeoServer. Every request comes to the dispatcher and then the dispatcher decides what to do with that request. So for sure security is really involved when we need to decide, for example, if a request is allowed or not for a particular user. Then we have two different elements that are services and catalog that are the two different types of resources that we can handle inside GeoServer that we can use to decide permissions. So authorization rules for resources inside GeoServer. Services are, for example, standard OGC services like WMS, WFS and so on, but also other kind of services like the REST API that GeoServer has to handle administration stuff. The catalog is about accessing the real data that the GeoServer publishes like workspaces, layers and so on. And this aspect has a particular name that is authorization. So authentication and authorization are the two different aspects that we need to handle when we want to secure our system. And how GeoServer handles both of them, we will see some basic concepts that I will try to explain. 
The first one that is related to authentication are filter chains and authentication providers. You can see on the right, on the top right of the schema. And then the secure catalog on the bottom right. Secure catalog is simply a wrapper to the catalog that allows to implement security rules on top of the standard catalog. So every request to the catalog is wrapped so that security rules can be applied. But another element that is very important that is called the resource access manager is the component, the pluggable component, because we will see that you can have different implementations inside GeoServer of the same concept like the resource access manager. Resource access manager is the component that handles the security policy. So all the rules that permit or deny access to your resources. Okay, now we will see a little bit more in detail these concepts. For example, what are filter chains? Basically when you send a request to GeoServer, this request has to be recognized by its type, as they say, and can, depending on the type, can be handled by a different set of security rules. Each of them is called the chain of filters. Why? Because basically what you do, which is kind of request, is to apply a set of filters that are simple pieces of code that take the request and apply certain actions to the request itself and then pass it to the following filter. So you have a chain of filters that are all applied for every request. We have different chains because we probably want to apply different security rules to different kinds of requests. For example, we want to handle requests to the administration interface of GeoServer differently from requests that are OGT services goals. For example, we want to use a classic form login for the administration interface, while for accessing WMS service, we want to use a different authentication mechanism. For example, basic authentication or certificates or any other kind of authentication system that you can think of. So we have different chains for different requests. Each chain is a sequence of filters that are applied one after the other to the request to decide authentication stuff. So what is the user? Who is the user that is accessing the system in this particular time with this request? Okay. Here you can see a sample screenshot of what I was talking about. These are samples of a filter that you can use inside your server. Each one is dedicated to a particular aspect of the authentication phase. For example, you have one filter that handles a session. So if you authenticate once and you have a session filter, you don't need to authenticate for every request to the web administration interface, for example. You have other filters for remember BISO to handle cookies or for anonymous access and so on. So many, many, many filters that you can use to handle your authentication. Here is a quick list of the filter that you have by default. Obviously, since the server is completely pluggable, you can add more filters just developing a small class implementing an interface, adding it to the set of libraries of your server and have more filter that you can use inside your infrastructure. In addition to filters that are basically dedicated to fetching credentials from the users or similar stuff, you have authentication providers. So basically authentication is divided into phases. First you get the credentials from your users using various methods like form, like basic authentication or an external system. 
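As an aside, before the discussion of credential sources continues: from a client's point of view, two of the mechanisms mentioned here look roughly as follows. A hedged Python sketch: the GeoServer URL, the credentials and the token value are placeholders, and the authkey request parameter only works once the AuthKey extension mentioned a bit later is installed and configured.

```python
import requests

GEOSERVER = "https://example.com/geoserver"      # placeholder URL

# HTTP Basic authentication: the credentials are checked by the filter chain
# and matched against whatever user/group service is configured (XML, JDBC, LDAP, ...).
r = requests.get(f"{GEOSERVER}/rest/layers.json",
                 auth=("admin", "geoserver"),    # placeholder credentials
                 timeout=30)
print(r.status_code)

# Token-based authentication with the AuthKey extension: the token travels as a
# request parameter and is mapped to a user by the configured filter.
caps = requests.get(f"{GEOSERVER}/wms", params={
    "service": "WMS", "version": "1.1.1", "request": "GetCapabilities",
    "authkey": "00000000-0000-0000-0000-000000000000",   # placeholder token
}, timeout=30)
print(caps.status_code)
```

Which of these (or others, such as certificates or CAS) actually gets applied to a given request is exactly what the filter chains described here decide.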
You have many ways to get credentials that you can configure; obviously, you can decide which method of authentication you want to configure in the system. Then you need a way to trust the credentials that the user has given, to be sure that they are associated with an existing user, and to know which permissions, which rules, need to be applied to that particular user. This second phase is handled by authentication providers. There are several examples of authentication providers available in GeoServer. For example, you can use an LDAP repository to match credentials with an existing user in the repository, or a database, any kind of storage system basically that you can use to match the credentials with the trusted ones. In addition to that, GeoServer has another set of providers that are specifically aimed at associating users with roles in the system, because you can categorize all of your users so that they are divided into groups and roles. Roles are quite important because, in the core security system, a role is the only entity you can associate permissions with. We will see in a moment that to associate authorization with your users, you need to specify which permissions each entity has. You cannot do it user by user in the core system. We will see that there are extensions that allow you to associate permissions also with a single user, but in the core system you can only do that with roles. You need to create roles, you need to associate roles with users, and then you can bind permissions to the specific roles that exist. Since roles are important, there is a specific component, called role providers, that can bind, let's say, roles to users. For these, you can decide which kind of storage, which kind of service, you want to use for this particular task. You can use LDAP, as we have seen also for users, you can use databases; there are many options. And it's extensible, so you can create your own role provider if you need one. Also, GeoServer includes some extensions. They are not part of the standard installation, but there are several extensions that you can install, and some of them are dedicated to security. For example, there is an implementation of the CAS single sign-on system. Another one is called AuthKey; we use it in many cases inside GeoSolutions because it's a generic implementation of token-based authentication. You can use it whenever you have, in your infrastructure, something generating a token for authentication, probably an expiring token or something like that, that you can use to share authentication between different systems. AuthKey is meant to handle this kind of use case. Okay, let's change our topic from authentication to authorization. What is authorization about? It's about giving users and roles permission to do actions on resources, basically. So when a user tries to do a particular action on a particular resource, we need to decide if this is allowed or not, and if it is allowed, whether limits apply to how we access the resource. For example, let's say that we want to do a WMS request to get a particular layer, a map for a particular layer, and we want to decide how the user can see this particular layer. We can decide that he cannot see it at all, so it's completely denied, or that he can access it fully, or that he can access it but in a limited way.
For example, he could access only part of this particular layer, only a particular region of the world; or, to make another example with WFS, we can decide that he can access only some attributes of a particular feature type, and so on. So we have basically three use cases: we can decide to deny access, to allow it, or to allow it with limitations, with constraints. The authorization system permits configuring all these aspects using a component that is called the resource access manager. The resource access manager is really an interface that can be implemented by several modules. There is a core module that implements a basic, very simple system. In the basic system, you can only associate permission rules to roles, not directly to users or groups, et cetera, and you can basically decide the permissions for workspace and layer, but only allow and deny. You cannot specify limits, for example. The same you can do for services, so you can decide whether the WMS or WFS service can be accessed or not. Then there are extensions to this basic subsystem that you can use to replace the basic authorization system. One of them is GeoFence, a security system developed by GeoSolutions internally, but that is now an extension, a community module for GeoServer, that you can use. It's fully configurable and has its own interface, existing in two different forms. One is a standalone application, external to GeoServer, that you can use to configure and implement the rules, and one is directly integrated inside GeoServer. The integrated one is simpler to use and uses the same web administration interface. It does not currently have all the functionality of the standalone one, but we are going to make them equivalent in the long term. Another option, the one I would like to suggest for most use cases, is for when probably neither the basic subsystem nor a generic system like GeoFence can be applied to your own situation. In all those cases where you probably already have a security infrastructure in your company and you just want to integrate GeoServer into the existing infrastructure, what we usually do and what we suggest is to implement your own version of the resource access manager that can apply, in a simple way, your existing rules, which are probably not as generic as GeoFence allows but are very specific to your use cases. If you already have something that describes these rules, for example in a database or in an external repository, the simplest way to implement your own authorization system is to write your own implementation of the resource access manager interface. It's quite simple, because it is a simple interface where you decide, for each pair of user (with all its existing categories like groups and roles) and resource, how the user can access that particular resource. You have several methods in the resource access manager, each one dedicated to a particular set of resources, and you just have to return a description of the permission for the user: the user is allowed to access the resource, or the user is denied. If the user is allowed, you can describe your limits. The limits... let's see if I have an example of that, probably not. Basically what you describe, through an object called access limits, are filters on the data.
For example, if you have vector data that you want to filter based on the user that is accessing it, you can express a filter, a simple CQL filter like the ones you usually send to GeoServer to filter your data. Instead of doing it in your application, you can set it as a limit directly in your resource access manager in GeoServer, so that it is applied automatically to every request for that particular user. You can also apply spatial filters, so each user can see a particular region of the world. You can also limit the number of attributes that are visible, or which particular attributes. You can also decide if something is readable or writable. You have many options that you can implement in your resource access manager. That's it. I think it's time for questions if you have any. Thank you for the speech. I'm interested to know how we can extend GeoServer WFS services so that when a user edits a feature we can also log the user account that has actually edited this feature, for logging. Yes, if you understand my question. Yes. I don't remember the name, but there are some extension points that you can implement to catch particular requests and implement your own logic to, let's say, do something like logging or similar stuff. You can just capture every request and do what you want with it. We usually do it, for example, to handle some security stuff that cannot be handled by the standard resource access manager, but I think logging is another use case. Transaction listener, that's the interface. Transaction listener, that's the interface that you can implement. You just implement the interface, compile the module, and install it. Other questions? Thank you, what are the main differences between the default security system and GeoFence? The basic security system has many limits in what you can configure. As I said, you can only associate permissions to roles, not directly to a single user or a single group, while with GeoFence or any custom resource access manager you can also do that. You also cannot do something like mix and match service-based and resource-based permissions. You cannot say this layer can be accessed via WMS but not via WFS. You have permissions only for services and layers, but they are not mixable; they cannot be combined. With GeoFence you can do something like that. Also, with the basic system you cannot apply filters or limit the number of attributes. You can only say allowed or denied. You do not have ways to specify limits for this particular layer, for example. We started trying out GeoFence and we found out that you can run GeoFence next to GeoServer or you can have GeoFence somehow included in GeoServer. What are the main differences? What do we have to look at? What is the recommended way? I should have a quick slide on that. Basically the directly integrated version has some limitations in what you can configure, basically in the configuration interface. The core of the rule engine is basically the same. Another difference is that the directly integrated version uses the user subsystem of GeoServer directly instead of implementing its own, because the standalone version has its own database for users and groups, while the directly integrated one uses standard GeoServer users. The other limitations you can see in this slide: basically you cannot edit limits, you cannot control rights at the rule level, and some user interface stuff, basically. Okay, I think we have time for one more question. In GeoFence you can specify a spatial filter.
You can specify that on a layer, but also per user I think. Is the actual filter an intersection between those? Do we get an intersection between the two spatial filters, if you have a spatial filter directly in the request and another one in the security system? As far as I know, in GeoFence you can set up a rule, a spatial constraint, for the layer, but then also a user himself can be constrained to a certain spatial area, and then the effective filter... No, I don't think so. You can specify rules that contain spatial filters, but then the rules also specify the user. Okay, so in terms of the rule, the rule can be a spatial rule based on the layer or on the user. So then, is the resulting rule an intersection between those? And the second part of the question: if a feature is bigger than the spatial part of the rule, what is the right way to only return a result for the allowed area, even though the feature itself maybe spans a bigger area? Okay, probably I didn't understand exactly the question, but you probably mean if it's clipped by the... okay, for vector data I don't think so. I think it's a simple intersection to select data. So if it intersects, it will be returned, but it will not be clipped to the selected region. That happens for raster layers, where the spatial filter is really a mask on the raster. So if you put restrictions on all the layers and then specific restrictions on some layers... it's a very simple question. Okay, that's all the time we have. Thank you for a great talk, Mauro. We'll get switched over here for the next presenter.
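To make the deny / allow / allow-with-limits idea from this talk concrete, here is a small conceptual sketch. It is not the actual Java ResourceAccessManager interface; it only mimics the decision such a component has to return for a user and resource pair: an allow or deny flag plus an optional read filter, attribute subset and area of interest. All rule values below are invented examples.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccessLimits:
    allowed: bool
    read_filter: Optional[str] = None    # e.g. a CQL filter applied to every request
    attributes: Optional[list] = None    # attribute subset the user may see
    area_wkt: Optional[str] = None       # spatial constraint as WKT

# Invented rule table: (role, workspace, layer) -> limits.
RULES = {
    ("EDITOR", "topp", "roads"): AccessLimits(True),
    ("VIEWER", "topp", "roads"): AccessLimits(
        True,
        read_filter="status = 'public'",
        attributes=["name", "geom"],
        area_wkt="POLYGON((8 44, 12 44, 12 47, 8 47, 8 44))",
    ),
}

def get_access_limits(roles, workspace, layer):
    """Return the first allowing rule matching any of the user's roles; deny by default."""
    for role in roles:
        rule = RULES.get((role, workspace, layer))
        if rule and rule.allowed:
            return rule
    return AccessLimits(False)

print(get_access_limits({"VIEWER"}, "topp", "roads"))
print(get_access_limits({"ANONYMOUS"}, "topp", "roads"))
```

A custom resource access manager, GeoFence, or the basic system all ultimately boil down to producing this kind of answer for every request; they differ in where the rules live and how expressive they can be.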
The presentation will provide an introduction to GeoServer own authentication and authorization subsystems. We’ll cover the supported authentication protocols, such as from basic/digest authentication and CAS support, check through the various identity providers, such as local config files, database tables and LDAP servers, and how it’s possible to combine the various bits in a single comprehensive authentication tool, as well as providing examples of custom authentication plugins for GeoServer, integrating it in a home grown security architecture. We’ll then move on to authorization, describing the GeoServer pluggable authorization mechanism and comparing it with proxy based solution, and check the built in service and data security system, reviewing its benefits and limitations. Finally we’ll explore the advanced authentication provider, GeoFence, explore the levels on integration with GeoServer, from the simple and seamless direct integration to the more sophisticated external setup, and see how it can provide GeoServer with complex authorization rules over data and OGC services, taking into account the current user, OGC request and requested layers to enforce spatial filters and alphanumeric filters, attribute selection as well as cropping raster data to areas of interest.
10.5446/20394 (DOI)
Okay, everybody. Next up we have Jiří Kozel of Masaryk University. He's here to talk about their database of buildings and the work they do to visualize that data on a web map. Hello, everybody. Thanks for the introduction. So before I actually start talking about visualizing indoor data on a 2D map, I would like to introduce myself and the data we are working with. I work as a system analyst and development leader. We are a team of four developers, we are focused on web maps, and one of our key fields is visualization of indoor data. Yeah, that's probably better. Okay, Masaryk University is quite a big university. We have more than 30,000 students and 500 employees. About 12 years ago the university started to build a database, let's say a building information model; it was in the time of that boom. The database has been continuously updated till now, so today there are about 150 buildings in the database with indoor features. The primary purpose of the database is facility management; however, we are also trying to use it for orientation or navigation purposes, generally for generating maps for ordinary people. I started to work there three years ago, and since then we have built two web map applications using OpenLayers 3. In this presentation you will see some examples of how we are doing it. To give you a better picture of our indoor features in the database: we have more than 22,000 rooms there, plus a larger number of doors, walls, windows, stairs and so on. The important things are that every indoor feature is a polygon or polyline (actually only stairs are polylines), every indoor feature is georeferenced, and every one is related to one floor. So every room, every stair, every door is related to the one floor where it lies. There is no third dimension in the data, no Z coordinate; this is emulated only by the relation to the floor. Actually a similar approach is used, for example, in OpenStreetMap. OpenStreetMap also cares about indoor data, and there is a tag called level which works quite similarly to this relation to the floor. Okay, so when you are visualizing indoor data on a map, you are basically generating a floor plan. Everyone knows floor plans; you have probably built a house or bought a new flat or something like that, so you probably know how it looks. And if you have the database of features and the relation between features and floors, it's actually very easy to select only the features from one floor and to visualize them on the map using your favorite web map server or something like that. The clear limitation of a 2D map is that you are able to see only one floor at a time. You cannot see two or more floors in one map, or in one map window, let's say. When you think of or imagine a common interactive web map, the very important thing is that it is able to change the zoom level. On lower zoom levels you probably don't want to show indoor data; you are probably showing only the boundaries of the buildings, because at this point the user does not care about indoor data. But from a certain zoom level you show the indoor data to the user, and at this moment you need some kind of floor selector, as we call it. A floor selector is a component that the user can use to select which floor he wants to see on the map. So on higher zoom levels the floor selector is always there. So I guess up to now it's quite easy and straightforward, or I hope so.
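Under the hood, a floor selector is little more than a filter on that floor (or level) attribute, and the two ways of building its option list discussed next differ only in which features they aggregate. A minimal sketch, assuming GeoJSON-like features carrying a level property similar to the OpenStreetMap level tag mentioned above; the sample features are invented.

```python
# Invented sample features: every indoor feature is tied to exactly one floor.
features = [
    {"type": "Feature", "properties": {"kind": "room",  "building": "A20", "level": 1}},
    {"type": "Feature", "properties": {"kind": "room",  "building": "A20", "level": 2}},
    {"type": "Feature", "properties": {"kind": "stair", "building": "A19", "level": 2}},
    {"type": "Feature", "properties": {"kind": "room",  "building": "A19", "level": 2.5}},  # mezzanine
]

def floor_plan(feats, level, building=None):
    """Features drawn for one selected floor, optionally restricted to one building."""
    return [f for f in feats
            if f["properties"]["level"] == level
            and (building is None or f["properties"]["building"] == building)]

# Map-related selector: options are the union of levels of all visible buildings.
map_related_options = sorted({f["properties"]["level"] for f in features})

# Building-related selector: options are the levels of the active building only.
building_related_options = sorted({f["properties"]["level"] for f in features
                                   if f["properties"]["building"] == "A20"})

print(map_related_options)          # [1, 2, 2.5]
print(building_related_options)     # [1, 2]
print(len(floor_plan(features, 2, building="A19")))   # 1
```

The interesting design questions in the rest of the talk are precisely about what goes into that option list and how a single level number should be interpreted across interconnected buildings.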
But in the case of the floor selector, there is actually a decision you need to make: you need to decide whether you want a map-related floor selector or a building-related floor selector. Both approaches are currently used. A map-related floor selector affects all buildings visible on the map, so the list of options is actually the union of all floors of all buildings that are currently visible on the map. This approach is used, for example, by OpenLevelUp; if you don't know it, it's a project that visualizes indoor data from OpenStreetMap. The second type of floor selector is building-related. It affects just one building, so the options are the floors of only one building. This one is used, for example, by Google Maps Indoor. On the following slides I will show you a few use cases, and you will see the differences between these two kinds of visualization; finally, I will also mention some traps. So, a map-related floor selector: you can see it on the right side of the screen. There is one floor selector, actually over there. Sorry, it's in Czech, but most of our users speak Czech, so we have it in Czech. "Nadzemní podlaží" means above-ground floor, so that's enough of a Czech lesson. So if the user selects that he wants to see the second floor, he sees the second floor of all buildings on the map. It's nice, but the question is why the user would need to see this, because the user is usually interested in only one building. He is probably going to some room or something like that, so usually he needs to see only one building, or the indoor features of one building. On the other hand, the building-related selector is related to just one building. You see the bubble, and only one building is, let's say, active. For the other buildings you can see that there are some indoor features, but they are suppressed a little bit. So the user needs to click another building to activate it, and the floor selector, the bubble, will move to that building. So this was the case of solitary buildings; it's the quite easy one. But then we have something like interconnected buildings, where you are able to walk from one building to another. In the case of Masaryk University, we have about five complexes like this. The biggest one has about 40 buildings, so it's quite amazing; it's quite a labyrinth. In this case the visualization looks quite similar, because in the building-related case we want the user to see all floors he can walk to from the current floor. So when the user is in building A20 on the second floor, he is also able to walk to this corridor building and to building A19 and to building A17 and so on. So that's how we actually express the interconnectivity of buildings, or the interconnectivity of floors. Just to be sure: not every two buildings that stand next to each other are interconnected. In this case building C3 is active and it's interconnected with building A3, but it is not interconnected with the building called Object D. And the user can see this information just from, let's say, the opacity of these features. Okay, this is quite a specific situation, but you can meet it in real life quite often: two interconnected buildings are built on a hillside, on a slope, and you can actually walk from the second floor of building A to the first floor of building B without climbing a single stair.
If you have the map-related floor selector, you have a problem, because if you show the user the floor numbering as one, it's actually nonsense. If you look at these doors in the red circle, they actually lead into a wall, because these floors are not interconnected. But if you have a building-related floor selector, and if you have appropriate data, you are able to show the appropriate interconnected floors. So in this case building B is the active one and floor one is the selected floor, and there's no problem showing floor two of building A, because the floor selector is bound only to building B. Okay, so enough about floor selectors and interconnected buildings. Now about connections between floors. Usually you are using things like stairs, ramps and elevators; if you are tough guys, you can use a ladder or a rope or whatever you want, but usually we stay on the left side. So yeah, I will focus on the visualization of stairs, because ramps and escalators are actually quite similar to stairs, some kind of inclined surface, and the visualization of an elevator is actually quite easy; there is, I think, nothing so special about it. So stairs, scary stairs. As I said in the beginning, every indoor feature is related to one floor, and stairs are also related to one floor in our database. The obvious problem is that I can see the stairs on the last-but-one floor, but there are no stairs on the last floor, because no stairs are marked like this. It's definitely not very good for the user to show him a floor plan without stairs. There are at least two things that can be done with this. The first thing is to show the user the up or down direction of the stairs, and you should show it at the entrance point of the stairs, not in the middle of the stairs but at the entrance point where he can enter the stairs. So that's quite an easy thing you can do. The second, a little bit more complicated thing you can do is to split the visualization of the stairs vertically, and show half of the stairs on one floor and the second half of the stairs on the other floor. In this case, on the floor plan of the fourth floor you actually see the part of the stairs coming from the third floor and the part of the stairs going to the fifth floor, and on the fifth floor you can see the last part of the stairs coming from floor number four. I think this side view is, or should be, quite clear, I hope. Okay, there are also other staircases, for example minor stairs that are actually not leading to any different floor, for example like this one. It's generally not so big a problem for us, as you can see from the picture. And there are also vertically interrupted stairs, quite often in old buildings, or maybe buildings like this, but it's generally also not so big an issue. Those vertically interrupted stairs are actually quite a good use case for splitting the visualization of stairs. Now, there are mezzanines. A mezzanine is a floor between two regular floors, and it's actually quite easy to handle, because you just tag the features of the mezzanine with a different floor number; in this case it's 2.5, and the user will see the 2.5 floor in the floor selector. Multilevel rooms are, I think, the last specific feature I will talk about. The problem with multilevel stepped rooms is that they usually take up more floors, and in this part the multilevel room overlaps some other room on the same floor, on floor one.
And in the map it can look like this. These pink lines are actually stairs of some big lecture room. And under these stairs there are other rooms let's say in the same floor. But this visualization is for ordinary user is very difficult to read. And I think we really want the user to see it in a better way. So what can we do with this is that we can split the visualization of multilevel rooms into two floors similarly as we did it in case of stairs. This splitting is just in our heads. It's not implemented yet. But I think this could work. The visualization could be quite readable for the user. So this was about visualization of indoor data. And two minutes. So I will finally say just few words about our applications that I mentioned in the beginning. The one is Compass. It's a specialized application for facility management and it's not available to general public so I will skip it quite quickly. Just notice that we have also database of technological devices. And in Compass you are able to see multiple floor plans in more different map windows at one moment or in one moment. This red cross is actually position of mouse cursor. The second one, Moonymap, is quite new one, library. It has simple public API and it can be used in every web page. If you want to see it in action, you can visit this page. There is actually nothing except link to a few examples and it's in check. But if you want to see it in action, I guess it works. We are using client-side rendering in OpenLinear 3. Canvas renderer, loading tile strategy. For some calculations we are using Corsair 2.js. And from the developer point of view, you just need to link our library to call some basic method with some basic options. And as a result, you can see the map probably with some marked room or building or whatever you want. Yeah, because I am running out of time. So I will, this is the last slide. So I hope I gave you some responses or some clues to these questions. I hope I gave you also some things to think about. And so those are just basic points I spoke about. So you should consider the type of floor selector you want to use. You should inform user about interconnected buildings. You should or you can show the up and down direction on stairs and trans points. And you can also consider split visualizations of stairs and multi-level rooms. And the last positive thing from my point of view is that client-side rendering is quite easy to do, quite fast today, sorry. But it was really surprising to me. So I think it's a really good idea to do the client-side rendering because it enables you better interactivity. That's all from me. Thanks for your attention. We have about five minutes for questions for Jamie. Yeah, even straight ahead. Can you go back to the one with the split stairs? Split stairs. Yes. This one, okay. This is just Photoshop. Okay, but it's a theoretical question more than anything. So right now you're using an arrow up and arrow down on the map to convey this stir goes up and down. Right now we are not even doing this. But the thing is, how can you tell the difference? How can a user tell the difference? How can you cartographically convey the message that that's not north and south, that's up and down? How can you convey that third dimension when all we see is north, south, east, west? I think what can you basically do to join the arrow with some simple icon of stairs, for example? Yeah, the stairs are there already. 
Yeah, in this map there is some drawing of the stairs, but if you put there some real icon like points of interest, it can make sense. Well, I'm trying to give you the response. Yes. We have two more questions coming. Hi. I wanted to ask whether you saw, here I see you have doors and windows and down on the floor plan. Do you use them in database? And if you do, whether, or did you ask if you use them as separate objects in database and whether they have properties like dimensions and stuff and which side the doors open and stuff like this? Yeah, thanks for this question. So the database is, the objects are divided into few different tables. There is table like windows, doors, rooms and so on. And every object may have some basic attributes. I think in case of windows is actually width and height and maybe also the depth of the window. And in case of doors is type of the lock used in these doors and so on. I think basically we are not using these attributes for rendering right now. Yeah, they are used for some computing and so on, but they are probably not used for visualization. Hi. I wanted just to ask why don't you use architectural graphic conventions or standards? Especially for the stairs. Well, we inspired in these drawings. And actually I think we are trying to get quite close to it. Do you have any like specific? I'm an architect, so I'm looking at that and looking at, you know. You are an architect. So it can be similar discussion with you, interesting discussion with you after. I would like to ask you actually what's strange to you in these plans? Okay, thanks. Yeah, that's the thing what the common users know about this. I'm glad to see that so many of you have opinions on this and I think you probably have a very interesting discussion afterwards. Right now I think we should give another round of applause and then get ready for the next presenter. Thanks, so much. Okay. 【applause】
It seems easy. Tag rooms, doors, and other indoor features with level number (or floor or storey), put level selector to the map and show features just from selected level. End of story. But what if there are two buildings A and B connected by passage? And what if these buildings are on a slope and level A1 is on the same height as level B3? And what about mezzanines? Are stairs part of the lower floor or upper floor? And where to show it? Aren't some big lecture rooms stepped? And aren't they also used to take more levels? Masaryk University maintains geospatial database of its own buildings including polygon features like floors, rooms, doors, windows, or walls. It contains more than 200 buildings and 20,000 rooms. Based on the database we are building web maps in OpenLayers 3 for specialized users as well as for students and academic staff. Therefore we have faced similar questions as mentioned above many times and I would like to share our experience.
10.5446/20392 (DOI)
Hello, everyone. In the program this talk is actually listed as being by Vicky Vergara, so I want to say hi to Mexico; probably you are watching this presentation now. It was supposed to be a very funny presentation, with a song or something, but I'm not funny, so I will make it a little bit different, probably. So what did we have in mind, what was the mission of this project? We wanted to develop an open source library to optimize the fleet of garbage collection trucks for a city in Uruguay, Montevideo, and it should be done with FOSS4G tools. I had done some optimization projects before, and I don't know exactly who was contacted, whether it was Steven Woodbridge or me, but we were asked if it's possible to do this with pgRouting. We found this very exciting and we said yes. The budget was very low, we were three people in the team, and then I slowly disappeared, so mainly the work was done by Vicky and Steve, and they did it really great. I think it was a very challenging project. Everybody talks about sustainable cities: you have to consider the environmental impact, and we try to minimize the required inputs of energy, the waste output, the pollution. So optimization is a big topic these days, and our topic in particular was to optimize the collection of waste. And there's a lot of waste to collect: there's household waste, industry waste, dry waste, wet waste, recyclable waste, so there are many, many types. I'm not from this field, I'm not such an expert in this. For garbage collection we used something called vehicle route planning, the vehicle routing problem (VRP). That means you have a fleet of trucks, I think it was about 500 or 600 trucks for the city, there were a lot of container locations, I think about 10,000 locations, and we should optimize the schedule and plan good routes. Routes they liked. So actually a VRP looks very simple. This is a simple truck trip: it starts at the depot, it picks up containers, then it goes to the dump site, and it returns to the depot. So this is how you think it's all easy. But it's an NP-hard problem: it works quite easily for a small number of locations and a small number of vehicles, but when your fleet increases, it becomes a very difficult problem to solve. There are many variants of the VRP. For example, you can have a capacity; this is then called the CVRP, so the vehicles have a limited carrying capacity, and when they are full, they have to be emptied. Then you have something called VRPMT: one vehicle can do more than one route, so you have multiple trips. And another variation is the VRP with time windows, so a certain location has to be visited at a certain time. In our case, one truck should make as many trips as possible: you start at the depot, then you go to pick up containers, then you go to the dump site, then you go again to pick up containers and go to the dump site, until your working day is over, and then you go back to the depot. But we also have trucks with different capacities, so you have small ones and big ones; they are not the same, they don't look the same, and they also have different driving schedules. I remember they also said there's always a truck driving around to pick up containers that were forgotten, because it happens that they forget one. So in our case all three variations applied: the vehicles had a limited carrying capacity, obviously, and the vehicles can do more than one route.
And some of these locations also had certain time windows we had to care about. So our algorithm was something like a CVRPTWMT, if that exists: a capacitated vehicle routing problem with time windows and with multiple trips. And that's not all; unfortunately there are even more restrictions. For example, a garbage container could be in a street market. Or they cannot make U-turns, so you can't turn on a small street or even a big one. Sometimes containers stand on the right side or the other side of the street, and unfortunately some garbage trucks can only pick up containers from the left side. And this was not all; there are even more restrictions you have in real-world cases. For example, there are municipalities, and it's only possible that one truck picks up the containers of one municipality, not of another one, but there are also exceptions for that sometimes. Also, there's a relation between trucks: some waste can only be picked up by certain trucks, and some containers can only be mounted to certain trucks. It doesn't necessarily make it more complicated to solve; sometimes, like with the municipalities, it also helps you to break your 10,000 containers into smaller groups right at the beginning. And there are others, like access restrictions (at certain times you cannot go to the market area, for example), then you have speed limits, depending on the day eventually, and you have the turn restrictions that I mentioned already before. So this is a big disaster: an originally not-so-complicated problem turns out to be one where you don't really know how to solve it. First, we classified the restrictions into some global restrictions, like containers within municipalities, trucks for certain municipalities, and also containers that work for all municipalities, for medical waste. And we made some detailed restrictions, which we limited to capacity, right- and left-side pickup, and speed. So how to solve this problem? Of course there was some front-end development, and something good was that it was convenient that they wanted to use OpenStreetMap. In the front-end application you have to select containers, select trucks, and select the depot and dump site. You have to have an interface to query the database and to visualize the result, like the route on a map and the schedules, and to handle the restrictions. But this was not our part; some other company was working on this. We were only in charge of the back end, the VRP solver. And we tried to solve it like this: because we all came from the PostGIS and pgRouting background, and because trucks, containers and locations were stored in a PostGIS database, on top of this, using a C interface, we did an optimization algorithm implementation in C++ that builds an initial solution, then runs a tabu search, and handles the restrictions. And for that, to evaluate the routes, you need a distance matrix, and for that we used OSRM, or we decided to use OSRM, because we thought it's so fast that even huge distance matrices would be no problem. Now, this project is not so new; I don't know exactly when it was, but I think it was not even last year, it was before last year, and it took quite some time. So we used an old version of OSRM. If somebody sees this video and then says that this has all changed now, then yeah, that's very good. But instead of calling the web API, we thought it's better to use the C++ interface classes directly, because we get better performance.
And it was also very, very fast. But things were changing so often and so frequently that we were kind of stuck with this version. The web API is maybe stable, but internally they make many changes all the time. And we also had problems with the results: we got strange loops and we got strange routes. For example, we made many small distance requests, and from here it went to here and then to the red one, which should not be possible because it would require a U-turn, and trucks cannot do that; it should actually go around. We had this issue and others, so we had many issues with these very micro distances, like when you start in the middle of a street and you just go a few meters. Also, we had, of course, problems with OSM data, and altogether it's hard to find out where the problem lies. So yeah, this was an implementation of something like tabu search. The optimization is done by having seven different initial solutions, and the optimization was done in terms of trip ordering and minimizing the cost. The important part of this is the cost function, and you have to match theory with reality, so there's a lot of mathematical analysis necessary. It's the part I don't know so well, so before saying something wrong I'd probably better go to the next slide. Let's see if this is working. This is not really a beautiful map; it was for debugging purposes. You can see container locations, quite a lot, with different types: the red ones are a different type than the green ones. And this is an animated image. This is something you usually have in tabu search, this "insert best pair in clean trip" step, and this is the initial solution we got. With this nice animation you keep developers happy. So this is a complete route: it starts at the depot site, then it drives to the containers and picks up these containers, then drives to the dump site, which is there, and drives back to the depot. I'm not sure how easy this is to see, but there's the previous one, the red one, and using some modifications another solution was the blue one, for example. I will try to show a demo later in the browser. This is now a truck with three trips; it's only a part of the network, so you don't see the dump site. First it makes one trip, the red one, then it makes the green one, and then the blue one, and there's the summary of the time it takes. Then you run an optimization and the order changes. So after this was completed, we had many ideas; we learned a lot of things. The budget was very low, so we had many ideas that couldn't be implemented, and a lot of code that was discarded. In the beginning, learning what is really required is not so easy to do from Europe, Japan and North America, work-wise; you can only do Skype meetings. So we have a lot of unused code left over. First we wanted to try the latest version of OSRM, but it wasn't possible to upgrade before this talk; we tried to do that. Also the C++ that was used is quite old, so there are better ways to implement this, and we want to improve the function library. We also think it's good to have some open front-end application, because the current one is not so fancy. And we think that because we used PostgreSQL, there's also value in this for pgRouting, so we already brought over some functions; some ideas are already implemented.
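Before moving on to those pgRouting functions, a quick aside on the distance matrix: current OSRM releases expose an HTTP table service, which avoids binding to the C++ internals the project struggled with. A hedged sketch; the public demo server and the coordinates are only illustrative, and for real workloads you would run your own osrm-backend instance.

```python
import requests

# Depot, two container locations and the dump site as (lon, lat); illustrative coordinates.
stops = [(-56.1645, -34.9011), (-56.1700, -34.9050), (-56.1580, -34.8970), (-56.1810, -34.8890)]

coords = ";".join(f"{lon},{lat}" for lon, lat in stops)
url = f"https://router.project-osrm.org/table/v1/driving/{coords}"

resp = requests.get(url, params={"annotations": "duration"}, timeout=30)
resp.raise_for_status()
durations = resp.json()["durations"]   # seconds; row i holds travel times from stop i

for row in durations:
    print(["-" if d is None else round(d) for d in row])
```

The resulting matrix is exactly what a VRP solver consumes to evaluate candidate trips, whether the solver itself lives in C++, in the database, or elsewhere.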
The currently available functions are the family of withPoints functions. WithPoints means that the locations of your containers are not at the start or the end of a road; they're usually somewhere in between, and you can visit them. So there are some visiting concepts, like having to access them from the left side or from the right side, and you can also start within an edge. We have the ability to modify the graph to include temporary points, so even if there is no node in the middle of the edge, just for this query it will be added temporarily, or read from a table. If you're interested in the withPoints functions, they're actually quite well documented in the pgRouting documentation. This is something we needed for this project, and it also depends on which country you live in, whether it's the left side or the right side. And there are a couple of lessons learned. If you use OSRM to evaluate your routes, your distance matrix, you store your data again somewhere else; on the other side we have PostgreSQL, and I think everybody would like to not use OSRM and just do it in pgRouting. We are already thinking about how to do that: we are not as fast as OSRM, but maybe fast enough to do all of this with pgRouting. So, computing the distance matrix. There's the theory, and the theory is, yeah, this is an NP problem, and execution time grows exponentially if you have more locations and more trucks. There's a lot of high-level abstraction when you look at the documentation of VRP problems, and there are not so many restrictions, and very often the distances are a Euclidean approximation, and it's not possible to use this for real projects. Because in reality you also have NP problems, but the user wants very fast execution times. Users have many restrictions, many of which you don't know beforehand, and restrictions might even change from case to case, and some of them are even implied by how a city looks. And, what I found very important, drivers are humans: you can't just send them zig-zagging through the city. They also want to be able to drive the route you tell them, so maybe stay on the street, even if it's not the perfect, the best solution, but one that is doable. And if we have a little time, I want to open this so that you can see some trucks driving. OK, so here they're coming. And so in this area they go to each container and empty the containers, and then drive to the dump site to empty the truck, and then go back to the depot. Yeah, so if you have a similar problem... the funding was very low, so I think we have a lot of lessons learned, many things that could be done better. And there are already other implementations: GraphHopper, for example, now also has some VRP functions. But as we found, there are these mathematical models and they rarely apply to reality, and customers have very specific demands, so it's quite complicated to find some generic solution that works for everyone. There's a lot of code currently, even on GitHub, but it needs more work to make it understandable for everyone. I think the time is up, so if somebody has a question... What is tabu? Sorry? Tabu search, is that an existing product? No, tabu search is a mathematical optimization technique. How to say it?
It's a pattern for how you try to find the best solution for such a problem. There are others as well, but it's a very common practice: you start with an initial solution and then try to improve it by minimizing certain parameters. And are those distances all pre-computed for such instances, or do you have to compute the distances after each iteration? Yeah, OSRM is really fast, so you can do that. I don't know how they... we only delivered this solver, so I don't know if in the real application they store the distances somewhere. I don't know how the final product looks; this was only the back end, or even just the algorithm part. You can keep the matrix, but if the data changes all the time, you have to do it again. But OSRM is really super fast. The problem is that you don't know why things are wrong, and I saw crazy, crazy mistakes. Sometimes they were fixed, sometimes it was an OSRM problem, and sometimes it was a data problem. So yeah, I think these very short distances are not such a common use case; maybe OSRM is not the right tool for this. OK. Thanks, Daniel. Thank you for the talk.
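Since the question of what tabu search is came up: here is a toy sketch of the idea, applied to reordering the stops of a single trip. This is emphatically not the project's C++ solver; the distances are Euclidean purely to keep the example self-contained, which, as the talk itself warns, is unrealistic for real street networks.

```python
import math

stops = [(0, 0), (2, 5), (5, 2), (6, 6), (8, 3), (1, 7)]   # invented locations; index 0 is the depot

def cost(order):
    path = [0] + list(order) + [0]                 # leave the depot, visit stops, return
    return sum(math.dist(stops[a], stops[b]) for a, b in zip(path, path[1:]))

def tabu_search(iterations=200, tabu_len=5):
    current = list(range(1, len(stops)))           # initial solution: visit stops in input order
    best, tabu = list(current), []
    for _ in range(iterations):
        # Neighborhood: every solution reachable by swapping two positions.
        candidates = []
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                nxt = list(current)
                nxt[i], nxt[j] = nxt[j], nxt[i]
                # A tabu move is still accepted if it beats the best known (aspiration).
                if (i, j) not in tabu or cost(nxt) < cost(best):
                    candidates.append(((i, j), nxt))
        if not candidates:
            break
        move, current = min(candidates, key=lambda c: cost(c[1]))
        tabu = (tabu + [move])[-tabu_len:]          # forbid undoing recent moves for a while
        if cost(current) < cost(best):
            best = list(current)
    return best, cost(best)

print(tabu_search())
```

The tabu list is what distinguishes this from plain hill climbing: by temporarily forbidding recently used moves, the search can accept worse intermediate solutions and escape local minima, which is why it is a popular heuristic for VRP-style problems.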
Garbage collection is a topic for sustainable cities that are moving from picking up at individual houses to picking up garbage stored in containers. Minimizing trucks on the street and minimizing the travel times, while maximizing the number of containers that are picked up, are desirable properties of the planned routes. These kinds of problems have different types of constraints, for example capacity constraints: a limited number of trucks, each of different capacity. Some are time constraints, for example a set of drivers might have the morning shift, while some others work the night shift. Some constraints are topology based: a truck cannot make a U-turn or an acute turn. In this presentation you will learn the concepts behind this kind of optimization problem and how FOSS4G can facilitate finding a solution.
10.5446/20388 (DOI)
Okay, so then I think we are ready to start the session, if everyone is settled in. Okay, so welcome to this session. We have two speakers today. First off we have Petr Pridal to talk about EPSG.io, projections and coordinate systems. So welcome. Thanks. Good morning everybody. I'm Petr. I'm from a company called Klokan Technologies in Switzerland. And I'm here to present a system which we have developed for searching in coordinate systems and for basic previews. So what is EPSG.io? It's a free online full-text search. The problem we were facing was how to search the coordinate systems with full text. So if people know text which is in the description of a coordinate system, how can they identify the exact definition to use in their software? A typical problem we had is that we receive data from a country we don't know enough about, it's missing the spatial reference system, and we can't process the data. This system is designed to solve that problem. It indexes the official EPSG database, so data which are freely available at epsg-registry.org, and also the Esri definitions via the GDAL scripts. It's designed for discovery of all the parameters in EPSG, not only what you need for the spatial reference system, for the coordinate systems, but also the other attributes which are in the database. And it supports exporting to various data formats. It's an alternative to spatialreference.org, which was a system designed previously. But spatialreference.org didn't have the full-text search and it didn't have support for the different transformations between datums. So those are the two main differences. Before I start, I wanted to know who knows what a geodetic datum is. Okay. And who is using spatialreference.org? Who has ever used EPSG.io? Okay. Thanks. So, maybe a few questions more. Whose problem is it to find the coordinate system for data which is missing it? Like, do you have the problem that you receive data which is missing the coordinate system and you are searching for the coordinate system? Please raise your hands. Okay. And who is just interested in coordinate systems in general, or in the search system? Okay. First, the basic use of the system. If you end up on the front page of EPSG.io, you will see a search box like on Google where you can type a country, a code or the name of the coordinate system. If you do this, the system automatically provides you with the search results from a full-text search. So on the left you see the different coordinate systems in use in the area of the country you have typed. When you click on one, it's presented on a detail page where you can see a preview of the centre coordinate. So you see how many digits there are in the given coordinate system, what the rough range of the coordinates is, whether it's a projected coordinate system or not. And there are exports in different formats available on the site. So you can get the well-known text, as you see here, with all the technical information about the system. And there are also exports in Proj.4 format which you can copy and paste into external software like open source QGIS or other systems, or outputs for Mapnik, or for example for PostGIS with an SQL query so you can add the system into your database and it is supported. The system also has a preview, so a basic user can just identify a place on a map.
And it gives estimated coordinates in the given coordinate system. So here we are zooming to our office in Switzerland, and above you see the coordinates in the Swiss coordinate system, which you can copy and paste and use for whatever you need. You can switch the base maps. If you are searching for coordinates, it's good to have this sort of reference where you see the coordinates and you see them on the map. This is another tool which is available there for basic use: transformation inside the web browser, where the coordinates you picked on the map or typed in can be transformed into a different coordinate system directly in the browser. You have seen switching between angles and, now, formatting of the degrees, and you can choose a different coordinate system to transform to. So if you are just in front of a web browser and you need to make a quick, basic transformation from one coordinate system to another, this is a pretty handy tool to solve the problem, because you can find any coordinate system and transform the numbers from one to another in the browser. So this is the basic core functionality of the system. But there is more. Before we continue, for a programmer who has no degree in cartography, you need to know a few terms, so just a short crash course. Coordinate systems are typically either geodetic, which are three-dimensional: latitude and longitude, or lambda and phi, typically measured in degrees. That's what most people think of as giving a precise location anywhere on Earth. The other group of coordinates are projected, which are measured typically in metres or feet and are two-dimensional, on a plane. Both of these kinds of coordinates define an origin somewhere, a zero-zero, and this differs depending on the coordinate system definition. The way you define the zero-zero and how you adjust the ellipsoid, the approximation of the shape of the Earth, and fit it to the geoid, which you see in the picture, defines the datum. So the datum is the way you take an ellipsoid, somehow deform and rotate it and fit it to the real terrain you are trying to approximate, and define where zero-zero is on this ellipsoid. That's the geodetic datum. This is the first step of producing projected coordinates, where you then define, on this ellipsoid you have rotated, a plane onto which you will project — it's either a plane or a cone or another shape — and then mathematically transform the coordinates from the ellipsoid to the plane. So that's a short crash course; I hope the geodesists here don't mind. EPSG.io, in the advanced search or advanced functionality, gives you the power to discover all these parameters which define the datums, which define the ellipsoids, which define the projections — where the origins are on the ellipsoid and where the origins are in the 2D projection. If you combine all of this together, a piece of software is able to transform the coordinates to the real location, so you know where you are, and therefore you can transform from one coordinate system to another using the open source tools which are behind the system, like Proj.4. So, an example of the advanced functionality, which is available in the portal but slightly hidden: on the right side, when you search, there are facets.
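As a small aside to the crash course above, here is a minimal sketch, assuming the GDAL/OGR Python bindings that the talk mentions later, which inspects two example EPSG codes and prints whether each is geodetic or projected, together with its datum and Proj.4 export. The two codes are common illustrations chosen here, not examples from the talk.

```python
from osgeo import osr

def describe(epsg_code):
    """Print the basic building blocks of a coordinate reference system."""
    srs = osr.SpatialReference()
    srs.ImportFromEPSG(epsg_code)
    name = srs.GetAttrValue("PROJCS") if srs.IsProjected() else srs.GetAttrValue("GEOGCS")
    kind = "projected (2D plane, linear units)" if srs.IsProjected() else "geodetic (lat/lon in degrees)"
    print(f"EPSG:{epsg_code}  {name}")
    print("   kind :", kind)
    print("   datum:", srs.GetAttrValue("DATUM"))
    print("   proj4:", srs.ExportToProj4())

describe(4326)   # WGS 84 -- geographic latitude/longitude
describe(32632)  # WGS 84 / UTM zone 32N -- projected, metres
```

The Proj.4 string printed at the end is the same kind of definition that EPSG.io offers for copy and paste on each detail page.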
Through these facets the system gives you not only coordinate reference systems, which you can filter — give me only geodetic coordinate systems, or give me only projected ones — you can also search for datums used in a given area or with a given name, and for the areas which are defined in EPSG. In fact, most people think that an EPSG number defines just a coordinate system, but it defines all of these objects. So there is an EPSG code for a transformation, an EPSG code for an ellipsoid, for a unit, for an area. All of these have their EPSG numbers. It's not only coordinate systems, which are mostly what programmers use in the computer world when they say, okay, I have EPSG 4326 and it defines what coordinates my data are in — it's also all the other objects in the registry, and through EPSG.io you can discover them all. The other thing you see here in the image is the list of transformations. One EPSG code, like this one, 5514, doesn't necessarily tell you the transformation method. There are several transformation methods between the datums, between the ellipsoids, which can be assigned to a single EPSG code. So an EPSG code doesn't necessarily give you the equation for the transformation from one system to another; you must use one of the transformations. In spatialreference.org this was missing. And these may give you wrong results: if you apply the wrong transformation to your data, you may be shifted by the datum transformation up to 25 metres, or even more, to a wrong place. So this is quite an important thing to know: if you know that your data are in a given EPSG code, what is the transformation? There are three transformation types typically used, at least as implemented in Proj.4. One of them is the three-parameter transformation, which is this little "3". There is another with seven parameters, and there is the grid shift file. The grid shift file is the most precise for a location. The others are just three numbers or seven numbers; a grid shift file is in fact a matrix, a binary file which you need to have on your computer, and it defines the transformation in absolute numbers. With grid shift files you can correct local mistakes, for example mistakes made on old maps when people were measuring from one hill to another and doing triangulation. All of these things are not mathematically describable with an equation; you need the data which tells you, okay, here was a certain mistake which is local, and here was another mistake caused by an error during the measurements back in time. For this, the grids are important. EPSG.io gives you a list of all these transformations for the given EPSG code, and in fact you can click on any of these; they are aligned to the area. If you click on one of these transformations, the URL changes and you get a new Proj.4 definition and a way to use Proj.4-based software for that transformation. So in this moment, if we click on Slovakia, you see the area is adjusted, but we have one-metre accuracy instead of six-metre accuracy, which is what we would get with the three-parameter transformation, just because we switched to the seven parameters. Down on the page you again find the definition of the coordinate system, so you can use it in different systems. So that was another use case, in fact the reason why we started to work on this together with the full-text search. The other thing which is quite interesting is that all these features are in fact clickable.
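To illustrate how much the choice between a three-parameter and a seven-parameter datum shift can matter, here is a rough sketch using the GDAL/OGR Python bindings. The +towgs84 values are made-up placeholders (they are not the real parameters of EPSG:5514 or of any other system), so the printed offset is only an illustration of the mechanics, not a real accuracy figure.

```python
from osgeo import osr

# Same projected CRS, but with two hypothetical datum shifts:
# a 3-parameter and a 7-parameter +towgs84 clause (placeholder values).
BASE = "+proj=utm +zone=33 +ellps=bessel +units=m +no_defs"
THREE_PARAM = BASE + " +towgs84=600,100,400"
SEVEN_PARAM = BASE + " +towgs84=600,100,400,5,2,5,4"

def to_wgs84(proj4_definition, x, y):
    """Transform one projected point to WGS 84 longitude/latitude."""
    src = osr.SpatialReference()
    src.ImportFromProj4(proj4_definition)
    dst = osr.SpatialReference()
    dst.ImportFromEPSG(4326)
    # Keep the traditional lon/lat axis order on GDAL 3+.
    for srs in (src, dst):
        if hasattr(srs, "SetAxisMappingStrategy"):
            srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
    transform = osr.CoordinateTransformation(src, dst)
    return transform.TransformPoint(x, y)[:2]

lon3, lat3 = to_wgs84(THREE_PARAM, 500000.0, 5300000.0)
lon7, lat7 = to_wgs84(SEVEN_PARAM, 500000.0, 5300000.0)
print("3-parameter :", lon3, lat3)
print("7-parameter :", lon7, lat7)
# Very rough conversion of the degree difference to metres, for illustration only.
print("difference  : ~%.1f m" % (((lon3 - lon7) ** 2 + (lat3 - lat7) ** 2) ** 0.5 * 111000))
```

With real parameters taken from one of the transformations listed on EPSG.io, the same comparison shows the metre-level differences the speaker describes.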
Because all these features are clickable, you can follow the link from a coordinate system to its ellipsoid. So now you are on the detail page of the ellipsoid — the URL has changed — and you can study how the ellipsoid is defined, how the sphere is deformed into an ellipsoid, what the parameters are. An alternative way to use the portal is, let's say, you want to see a list of all prime meridians. If you search for Greenwich and then delete the query and make a query just with the kind "prime meridian", you will get all 14 prime meridians which are defined in the EPSG database, and you can preview each one. And you can do the same with all the other object types inside EPSG. Okay, why did we create EPSG.io in the beginning? It's for MapTiler users, where MapTiler is a software which creates tiles and map services by pre-processing raster data. When people drop in a file, the coordinate system is automatically detected from the file itself if it contains one; but if not, then they need to search for the coordinate system and identify it, and this interface was the reason why we started to work on EPSG.io. The user interface in fact uses the search API on the EPSG.io website, which is documented on GitHub, and anybody can use it for their own software or websites. And there are third-party applications already, like an open source QGIS plugin using this EPSG.io API, doing the same search in the coordinate systems. There's also a transformation API on the website, but it's somewhat experimental — it's just calling Proj.4 on our server. So the same thing you have seen in the graphical user interface on the website, transforming one pair of coordinates to another, can be done by calling an HTTP endpoint. Again, it's documented, but please don't use it heavily. If you want to transform a whole list of coordinates, I would recommend using the gdaltransform utility on the command line, which is probably one of the easiest tools for this particular task: you just copy and paste the Proj.4 definition, put it in quotes, supply the list of coordinates, and it gives you the results. So it's quite easy. And if you want to use it in a scripted way, like writing a Python script or any other scripting language, there is the OGR library which is able to do exactly the same transformation. The EPSG.io source code is available on GitHub. It's completely open source, BSD licensed, powered by Python, Proj.4 and OGR, and you need the raw data which are freely available from the EPSG registry. In fact, it's prepared to run offline and is very easily installed on a notebook, especially if you have Docker — it's just a single command. So once you have Docker on your Linux machine, you just run docker run klokantech/epsg.io and the system will start automatically, and you can use it on localhost without having access to the internet, in an offline environment, including the APIs and everything that is on the website — it's a complete copy of the website. If you are not familiar with Docker, there is a graphical user interface called Kitematic. If you download Kitematic from docker.com, just search for EPSG.io in the graphical user interface and it will run the same way. Installation from the source code is of course possible as well. Future: we would love to bring grid shift files to Proj.4 transformations in the browser, so it would be possible to have high-quality transformations inside a web browser, and there must be some centralized web endpoint where the grid shift files are stored and pre-processed for web use.
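For completeness, here is a minimal sketch of calling the search API just described from Python. The endpoint and the response fields shown are assumptions on my part; the API documentation in the EPSG.io GitHub repository is the authoritative reference.

```python
import json
import urllib.request
from urllib.parse import urlencode

def search_epsg_io(query, limit=5):
    """Free-text search against EPSG.io; endpoint and parameters assumed from the docs."""
    url = "https://epsg.io/?" + urlencode({"q": query, "format": "json"})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # The JSON is assumed to contain a "results" list of matching objects.
    return data.get("results", [])[:limit]

for hit in search_epsg_io("Swiss CH1903"):
    print(hit.get("code"), "-", hit.get("name"))
    print("   ", hit.get("proj4"))
```

The Proj.4 string returned for a hit can then be pasted into gdaltransform or imported with ImportFromProj4, as in the earlier sketches.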
Bringing grid shift files to browser-based transformations is something on our roadmap. It would also be amazing to work further on processing the data and to join efforts between the different open source projects and the different copies of the database, which currently differ from project to project. So there is, for example, an EPSG database copy inside libgeotiff, another EPSG copy in comma-separated value files and various hacks in GDAL, another distributed with QGIS and other software tools. It would be great — and this idea was already discussed once on the OSGeo mailing lists — to have a single SQLite-packed database of the EPSG data, reusable by different tools, and I would love to work on this with somebody else and with the community to make this happen, simply because I think it would make upgrades of the EPSG database in all the open source tools easier, and EPSG.io could be one of them. It would also be amazing to have a system which versions the data on top of GitHub, so we have a repository where all the EPSG codes are stored and versioned, and you can make upgrades and corrections on GitHub directly, with versions. Recently Esri published on GitHub their official Esri projection definition codes, which I would love to merge as well. At this moment, in EPSG.io we are using the GDAL version, which is quite old, I believe; the latest version is on Esri's GitHub and it could be incorporated into the portal. Those are the things I would really love to discuss with somebody and work on further. So thank you a lot. If you are interested — a little advertising for another presentation — on Friday we are going to speak about vector tiles, including vector tiles with custom coordinate systems. So from our team, meet us on Friday. Thank you a lot. If there are any questions, we would be glad to answer. Thank you, Petr, for an excellent presentation. Do we have any questions? It might be slightly off topic, but my first question is: who actually maintains the EPSG codes? Obviously the answer would be EPSG, I think. But connected to that, my second question is: what if the code is wrong? We encountered in the Netherlands, for a long time, that the definition was missing one parameter, and it seems very difficult to fix that. So I was curious about your ideas on that. Well, the official way is to go through the official path, submit them emails, and push it through the official channel so it's distributed to everybody. That's exactly why it would be great to have the kind of GitHub setup where you can make a pull request, it's versioned and reviewed by somebody, by a community, accepted, and then distributed to all open source projects. That would be the future of the EPSG data, and that's exactly one of the reasons to make it. So at this moment you have to officially contact the IOGP or the EPSG group and submit them the request, and then it goes through the formal review and it comes out in the next version of the EPSG database. I don't know how hard that is; we have not done it. Yeah, okay. Yeah, we have a few more questions. Hi. While using one of the coordinate systems, I couldn't figure out which parameter defines the area of validity for a certain projection. You see a bounding box, and I couldn't figure out how it is defined, the area of validity for... So there are EPSG codes for areas, and in fact the areas even have polygons, which we don't display in the portal at this moment — we display a basic bounding box.
But there are polygons for each area, each area has EPSG code and the EPSG codes of the ellipsoids and coordinate systems projected or geodetic are assigned to the areas. So the code of the area is part of the definition of the coordinate system or the transformation or whatever is in the EPSG. Okay, are there any more questions? Peter? Okay, so we'll give Peter a hand. Thank you.
EPSG.io allows searching a global database of spatial reference systems, datums, ellipsoids and projections to identify the transformation parameters required for software to correctly handle geographic locations in a known coordinate system. This presentation shows various functions of the search system, and demonstrates how to use it efficiently to discover and identify the right coordinate system, transform sample coordinates online, pick a position on a map, convert units, etc. It is possible to export definitions of coordinate systems in various formats, including WKT, OGC GML, XML, Proj.4, SQL or JS, and directly use these in compatible systems such as Proj4JS and OpenLayers or PostGIS. The whole system is open-source with code on GitHub, and in the background it uses OSGeo Proj.4 / OGR for all the transformations; it is powered by the latest EPSG Geodetic Parameter Registry released by IOGP regularly. The open-source tools used in the backend can also be called on the command line for batch operations. Ideas for future improvement and cooperation with the community will be discussed.
10.5446/20386 (DOI)
All right. Welcome to another talk. I'm happy that Steven is here. He's well known, as he has been an organizer of FOSS4G, so he's been in the FOSS4G business for a while. So please welcome Steve. OK. Don't clap, you may be walking out in disgust in a minute. First thing is, if you're looking for technology, if you're looking for code, stand up and go now, because this is just maps. OK, no one's walked out. So what I want to do is I want to explore the way that we represent political border disputes on maps. And what I was really interested in was what changes as we've gone from paper maps, which we've had for hundreds, maybe thousands of years, to digital maps, and particularly what happens when those maps are on the web as opposed to in a printed atlas. So that's what this story is going to be about. This is an ancient map of Europe. It's one of those maps which shows Europe as a queen. It's interesting that Africa is somewhere to the northwest of Europe. You're going to get lots of maps. So a bit of an acknowledgement: we all build on other people's work. That's what open source does. So the open source bit is that I built this on the work of a guy called Ethan Morel. I don't know why he keeps disappearing. Ethan Morel wrote this really good paper. If you're interested in this subject, you should go and read this paper. If you've already read the paper, go and have a coffee, because there'll be loads of it that you know. So when maps were only available on paper, there was a slow process of change to reflect political reality and the claims of different states to territory. Today, digital mapping has changed everything. Or maybe it hasn't. What we're going to explore is whether digital mapping has really changed very much, and if it has, how does that work? So the question we're asking today is: has the transition to digital maps made any difference to the way cartographers represent boundary disputes? So a little bit about politics and cartography. Cartographers are not recording some form of objective reality. When they record borders, they're recording an assertion. And as Morel says, maps which depict legal borders are never objective, or at least not in an independently verifiable, scientific use of the word. And it's important to understand that. Or, as I put it more crudely, borders aren't engraved on the ground. I know you'll think that rivers are hard borders, but did you know that rivers move? Did you know that if the border between two countries runs along a river and that river moves, you've got a potential dispute over the land that's now enclosed, or not enclosed, by the river? If a political border runs through a mountain range — you think mountain ranges are fixed, they're not going to move — it was fine when both sides said the land down at sea level, either side of the mountain, is ours. Nobody worried about the land going up the mountain. Who knows where the border is across the top of a mountain range? And actually, I'm going to show you an example of that in a minute. So it's just important to recognize that these are man-made constructs. And cartographers haven't always shown borders on maps. You might think that we've always had borders, but we haven't. Before the rise of nation states, there were no borders on maps. And so look at this. This is an English example. It's 1535. It's a map that was made for Henry VIII. And what you see here is, if you know roughly the geography of England, this is Wales.
And what you see there is no borders at all. This is Scotland. And you can sort of sense that it's different. But there's no border shown here. It didn't matter. We knew that was Scotland. And it wasn't part of England. Wales, once Henry VIII became king, was part of his principality. There was no border. Ireland there, or Hibernia as it's called there, is a different country. And you can see it as different. So you see, there weren't those hard lines that we're used to. But let's go to the most famous Atlas of the lot, the first Atlas of the world that we know of, Mercator's map. You've got mountains. You've got rivers. You've got no borders. That's 1569. Mercator publishes this map. No borders. The first borders that we know about are coming just at the beginning of the 19th century and end of the 17th century, the 18th century. And you can see here the borders are starting to appear on this map. So, and that coincides with the consolidation of city-states, inter-nation states throughout Europe. And borders are very much a European thing originally. They only exist in Europe in the late 18th and 19th century. Before that, there were no such things. So let's have a look at an early border dispute. This is Mont Blanc. I want to tell you the story of Mont Blanc quickly. And from the 15th to the 18th century, Mont Blanc was part of the Duchy of Savoy. The Duke of Savoy acquired Sardinia in 1723. And subsequently became a leading force in the Italian unification. Consequently, Savoy became part of Italy because he had moved. He started in Savoy, acquires by Sardinia. I mean, this is great. People buying states. Then Sardinia becomes part of Italy. And he decides that he's made Savoy part of Italy. Now Savoy is part of Italy. How cool, right? French Revolution 1792, the army seizes Savoy. 1796, Sardinia cedes Savoy and Nice back to France. So now Savoy and Nice are part of France again. Here's the border shown on an 1832 map. And it shows Sardinia is down here. Savoy is up there. And that was an agreed summit. They had a summit, and they agreed that the border between the two countries would run through the peak of the Mont Blanc. And that map is annexed to the 1860 treaty that was agreed between Italy and France. But after the Second Italian War, this is the French map in 1865 drawn of Mont Blanc. And you can't see it clearly. So let's pull it up here. There is the border. And all of a sudden, France has pushed it right the way down here into Italy. And then almost at the same time in 1869, the Italians draw a map, and the border's up here. And they're drawing these maps up virtually the same time. And just to make it a little bit clearer, there are the three different border lines. I know that the maps are different scales and everything. This isn't perfect. I used that peak of Mont Blanc as a point just so you could get an idea. I mean, I'm not saying I've got it exact. But you can see people drawing completely different maps of this disputed region at the same time, or relatively the same time. And that was possible because these are paper maps that had a limited distribution. They were produced by state actors. And you produced a map that matched your assertion. Let's have a look at another one. Is this the Persian Gulf, or is it the Arabian Gulf? Well, perhaps to this audience who are, I think, predominantly European or Western, it really doesn't matter what the heck difference. Actually, out there, it makes one shitload of difference, let me tell you. So there we go. 
In AD 43, this is a very, very early map of the region. And up here, I don't know whether you can see it, but it says Persicom Mare, which means the Persian Sea. And this is the Mare Arabrum, or Arabicom Mare, there you can see. So they are separate things, the Arabian Sea and the Persian Gulf, right? Jump forward to 1548. And that's the Gulf of De Persia, right? That's this map, an Italian map. It's one of the first modernish maps of the area. Just a few years later, and here's a map, and where's it gone? There. It's the Arabian Sea, there. In fact, it's called the Sea of Catef. So what you're seeing is different maps, 1700, Mare El Catef, which is the Sea of El Catef, and it says there, formerly, the Persian Sea. So it's actually, you can see this name is changing, but it's almost the same time here, you've got it's the Gulf of Persia. And my favorite of this, and it's just showing how this has been. And the reason they're arguing about this is because that's Persia, which we now call Iran, and this is the Arabian States. And the Gulf is between the two. And they're arguing, I mean, just if it's not obvious, they've been arguing, these guys have been arguing for 1,500 years about whether this is Arabian or Persian. And occasionally, they actually get very excited about these things and do nasty things to each other. This last one, and I love this one, this is taken from, when I asked people on Twitter, I got phenomenal help from people suggesting things. And somebody sent me this map, which they believe comes from a Saudi Arabian Atlas. We've dated it to between 1948 and 1967. I'm not going to explain how we worked it out, but we know it's in that time zone. And what you can see here is that did say Persian Gulf. It was printed with Persian Gulf. And I don't know whether you can see it on this blow-up, but it's now been handwritten over it to say Arabian Gulf and then published in the Atlas. So they're still fighting over this. So what I want to suggest to you is that on paper, alternative realities can coexist. And one of the first attempts to create a consistent set of world maps was something called the International Map of the World. I don't know whether you've heard of this. It's a late 19th century, early 20th century endeavor to produce relatively high resolution paper maps of the whole world. There's about 1,000, they produced about 1,000 of 2,500 planned map sheets. And what you get is the introduction of dotted lines. And we all know now what a dotted line means. A dotted line means this is a disputed border. But the first ones came out in the 19th century when they recognized that if you were producing a global set of map sheets, you had to recognize that there would be different perceptions of what was a border. So that's one of the first examples of that. And that leads me into the thing that you may know of, which is the digital chart of the world, which is, in some ways, the forerunner to every world web map that we have now. The very early base was digital chart of the world. It was US military mapping, low-grade military mapping, that was put into public domain. It formed the basis for many of the early A&D, tele-Atlas maps that became used on the web. And it had a set of boundaries. And it was the first time that we had a digital world map. And of course, when you've got Google and the various other players starting to produce their mapping applications, they wanted consistent mapping across the world. They were mainly concerned with streets. 
But of course, when you zoomed out, you needed to show when you moved from France to Germany and all of these things. So they needed borders. And they came from these kind of products. And that's where the problem started. Because now people on either side of the border are seeing the same map. Oh, I'm going to have to go very fast. So there have been loads of examples. And this started with what's called the Google Maps War. I'm going to have to really go fast. Nicaragua and Costa Rica separated by a border that's a river. The river moves. Google doesn't move the border. Nicaragua invades Costa Rica because it says, this is our territory. And it says so on Google Maps. Google freak out because they're not meant to be a cartographer who's making these decisions, say it's not us. So that was the first Google Maps War. And it started something called agnostic cartography. Agnostic cartography are people who make maps without make any assertion as to the disputes that go on in Maps. I've got Google have developed this technique for this process, which I set out there. I'm going to rush through it. But they have a process for trying to work out how to resolve these disputes. And the key thing is that they use something called primary local usage. So if you call this place country A, it doesn't matter what the government of country B says. If all the people who live there call it country A, Google says it's probably country A. The thing about digital maps is you can have multiple versions of the digital truth. And they can be different. So in India, for example, you'll get a $10 million fine if you show the wrong maps. And India's got disputes with about all sorts of areas. And Ken Field pointed out that on your customs declaration, it actually tells you it's the most serious thing that you can do there. They're more important than drugs. So this is Kashmir. That's what you see as a border from India. This is Kashmir from outside. And you see all these dotted lines. Bring them both together. And you just get the thing, right? Google is showing two completely different maps to two different people. India's also got a bit of a problem with China. So this is the Indian version of its northeast border with China. That's the view from outside India. And you can see how the two combine there. So what Google is now doing is it's not saying, this is the border or that's the border. It's saying, if you're here, it's this. If you're there, it might be that. We are not making any claims. One more is Crimea. Prize for anybody here who knows what that says up at the top. I'm not going to read it out for you. This is the Crimean border shown from Russia. And just to, there you go. That brings it up. It's a hard border because that's Ukraine. That's Crimea. It's not Russia. If you look at it in Russia, Crimea is not part of the Ukraine. If you look at it from anywhere else, I don't know whether it's going to, oops, no. It doesn't come up. But there's just a very faint dotted line there because for the rest of the world, we're still maintaining that Crimea is part of Ukraine. Google stays out of these things now. This one is a great one, which is Guyana. What happened was, Guyana is, there's a part of the north of Guyana, which is disputed by Venezuela. So Venezuela says it's been there for hundreds of years. Guyana says, no, it's not. We're an independent country. Guyana names its streets in English for some reason. Venezuela names them in Spanish. 
On Google, for some reason, and no one's ever explained why, Google shows this coastal road up the northeast in Spanish. Here it's shown on OpenStreetMap, for example, where local people are giving the roads their names in English. OpenStreetMap has a different dispute resolution model. And it's all around on the ground. You'll have to read it because I've got to go too fast. Last one, I'm OK. I'll just get to the end. Jerusalem. There's probably not a more disputed city in the world than Jerusalem. I'm not going to make any comment on the claims of everybody for it, except to say that there's been the most massive edit war about what you call the city. And if you look here, OpenStreetMap lets you create all the tags that you want. So you've got names in English, names in German, names in Arabic, and you've got all these names. And you've got an official name, and you've got another name and another name. And actually, at one stage, the city centroid, which is what will appear the name label is on when you zoom out, the name was being changed hourly because the Palestinian mappers were calling it one thing and saying it was in Palestine. The Israeli mappers were calling it Jerusalem in Israel, and it was just going on. And in the end, OpenStreetMap locked it down. They said, we can't let this go on. And they said that it would have no name until everybody had agreed on what they could call it. So the city centroid has no name. Specialist maps can render any of the other name tags they choose to name, but that's a choice that you make. And they've said that they won't put a name tag back on until there's an agreement. So and they actually did that for East Jerusalem and West Jerusalem name points as well. So there's no favoritism at all. So the question is what's changed? And in my view, states still seek to impose their own version of the truth. Half a minute. Agnostic cartographers present alternative views to a global audience. They're trying to apply process rather than just listen to the people who shout the loudest. And this stuff happens almost instantly. So when there's a dispute, the Crimean thing happened within hours of the invasion almost. There's barely one single definitive version of the truth. And last thing, there are 200 of these disputes at the moment. This is going to go on for a heck of a long time. Thanks for the talk. Are there any questions? All right, we'll get the [?]. Don't ask me a difficult question. I think it's an easy one. I'm just curious. Do you know of any contemporary disputes in Western Europe at the moment? Oh, yes. Do you want me to? I've got a. Count me more than 200. There are a lot. There are a lot. And I think Denmark has them. I've got a Google doc that I created which lists them. There's a Wikipedia site that you can go to. If you look for political border disputes on Wikipedia, you'll find it. But yes, there are many in Europe. And some of them are very longstanding. The oldest border dispute, and I can't remember which one it is, is 600 years old. The average length of a dispute from start to resolution is running at about 90 years. Yeah. I'll give you the link afterwards if you want. There's a difference between disputes and things that are left unresolved, let's say, by agreement. And the Belgium-Dutch border in Bale and Nassau-Bale Hettog is an example of that, where they couldn't figure it out 150 years ago. And they said, well, we'll leave it like that. It's OK. But there are disputes. 
One is the Dollart, between Germany and the Netherlands. We think it's... There is one official place where you can actually have disputes settled, which is the International Court of Justice, and they settle disputes if both parties ask for it. It's basically a court case. I've been doing mapping for them, and it's very fun. I can't talk about any details, of course. But the fun is that both parties agree: we're going to solve this. And they both say, this is our solution, and ask the judge, please tell us which of the solutions, or what the solution, is. But the emphasis is that both parties agree. And that means that both parties are actually going to resolve the issue through some objective reality, maybe some geoscience, maybe some political science. In most of these disputes, it's historic assertions that overlap each other. In fact, Nicaragua and Costa Rica — that's one that is currently in the International Court, and it is being resolved through geophysical science, understanding what happens when rivers move, and the flood plain around a river, and all of that kind of stuff. But it's physical science that's being used to resolve it, and it's unusual that they've both agreed to that method of resolution. But it is a great example. That's the other one I was talking about. You said rivers change, but actually people change rivers, because you can divert the flow. In that particular case, they tried diverting the flow by digging channels. Or did they? Or did they? Or did they? Exactly. There's more. Yeah, there are more questions, but it's time to switch the room if you want.
A light hearted look at how digital maps have changed the way that we represent political boundary disputes. At any point in time there are over 200 political boundary disputes. How they are represented on digital maps is in itself highly politicised. This talk will explore: - changes from paper to digital - the politics of digital mapping - the wisdom of the crowd - how some recent disputes have been resolved - possible models of resolution for digital mappers
10.5446/20385 (DOI)
Okay, while we are setting up the shop again, I want to introduce to you Stephen Felton. I met him for the first time earlier this year for a rather English tea somewhere in the middle of London and he is one of the few people who I think have really shown that you can be an open source evangelist and a business savvy person while remaining a nice and approachable person. Quite often it's said that open source and business don't go well together. Well here's a living proof. Okay, thank you. May the first be with you. So it bugs me when people say that open source isn't free because you have to pay for support and services. It bugs me when people complain about bugs in the open source software but don't do anything about it. So this is an involving thought process. I want you to start to think about why people should be paying for free software. As it's an involving thought process, I call it a beta version. So right up front, let me make some confessions to you. I'm an evangelist. I'm a business person, as Mark said. I'm not a code writer. This talk is about evangelism for business within an open source community. So just bear that in mind. Have you ever wondered what makes free, how does free software work? I mean, is it something that you think about? What I want to do is I want to talk to the community today. Usually this talk is addressed primarily to users of open source software. But this is a community of people who make the software, who develop the software. And what I want to do is give you a few thoughts about how you can evangelize to users and to potential users the reason that they should engage with professional open source businesses and how they can move from just being users of free software to contributors to open source software. And I think sometimes in this community, we're suspicious of businesses within open source gear. I think there's a very different attitude within our sister community at Location Tech. And I think in this regard, probably in many other regards as well, we can learn from them as I'm sure they can learn from us. So a couple of questions for you to wake you up because you've been sitting there for a while. Who here works for a company that's sponsored them to come? Company or an organization that's sponsored them to come here? Okay. Who's an individual who's come under their own buck? Okay. Just looking around the room, that tells you something. How many of you work in academia or the public sector? So quite a lot of people working in academia and public center sponsored to come here. Okay. And now how many of you develop or consult with open source geo software? So that's quite a lot of you. Okay. So we're on the right track here. And finally, out of those people, how many actually contribute code? Right. Okay. So this is a code writing audience of people mainly from a geo business background working paid for to be here. So have you ever clicked the donate button? Now this is a question that I would ask to users very frequently and it's a question that you should be asking to users. Do you click the button often, occasionally or never? I gave this talk a while ago and one person put their hand up and said that they clicked on the donate button. I mean, I use QGIS for fun and I donate to the QGIS project because it's such a cool project. And if people write software, somebody needs to find a way to help them to keep on doing it. So what I'm going to talk about here is a little bit about the Phosphor G community. 
Why I think business is important to the Phosphor G community. How the business models work and many of you may know this and what it means for you and your organization when you're talking to your customers, how you might use this information. So I'm going to start by quoting Paul Ramsey. Quoting Paul Ramsey seems to be part of what goes on at Phosphor G. Paul Ramsey three years ago explained what Phosphor G is and why it works. He said, in FOS you get what you pay for. Everyone gets what you pay for. You get what everyone pays for. Now that's a beautiful summary of Phosphor G that you can use when you're talking to potential clients. There's a key word in each three lines of it, pay. It's not free and that's a message that I think we need to work on. It doesn't mean you can't use it freely but it does mean that somebody has to pay for it. If you've got half an hour, find the slides afterwards, follow the link that I put there, listening to Paul's talk is one of the best half an hours that you can spend in terms of getting away of articulating the message of Phosphor G. So who is this community that we're talking about? Well, it's thousands of people at this event as we've seen at all our other events. But it's also 39 projects, 50 chapters, 10,000 people signed up to over 200 mailing lists, a website that's in 15 languages, 245 businesses registered as solution developers and providers and something like 7,200 people employed in those companies, not of all of whom work on Phosphor G. This is not a small community, it's a big community. I wanted to work out how big this community was. I wanted to get some numbers. Why did I do this? People were asking me why are you doing this survey, Stephen? And the reason I'm doing the survey is when you're sitting in front of a potential client and maybe the opposition is a very, very large proprietary software company with a billion dollars of revenue and X thousand employees, being able to talk about your community and present it as a cohesive whole, even though it's distributed, gives you a very powerful tool in defending against the fud that's fear, uncertainty and doubt that your competitors may well try and use to win business against you. So let me give you... So I conducted this survey, I have to qualify and say I have a moderate number of responses to the survey. I've used some fairly crude stats to scale it up to give me an estimate of the overall OSGO economy. It's very rough numbers. If more organizations who are here contributed to the survey, which would take you five minutes, we'd get more accurate numbers that I could share with everybody. So there are 120 to 150 active companies developing consulting with OSGO software. There are somewhere between 1,200 and 1,500 employees engaged in this activity. There's somewhere between 150 and $200 million worth of revenue associated with this activity. These are quite big numbers. This is a good-sized software business we're talking about. We may be here, I'm a one-man business, somebody else is a 10-man business, boundless is maybe a 100-man business, the biggest. But overall, this is a pretty big organization. So those numbers are worth bearing in mind and they're useful, I believe. Now if you want to contribute, the URL is somewhat easy to remember, Bitly Phosphogy Survey, I'll put the slides up available afterwards so you can go and find it. It would take you two minutes to contribute to the survey. So why is it going on about this? 
I just want to write code is what you're thinking, but you're staying here, which is kind of you, thank you. So I'm going to quote myself and there was a long debate about something on a mailing list. There was far too much time on OSG and mailing lists and it was about business and stuff like that. So I said, I believe that the long-term sustainability of Phosphogy is at least in part dependent on successful businesses that employ developers, consultants, and support the community in its events. I'd go further. A healthy business community based on Phosphogy is the evidence that we're developing the right products that people need and want to use and it is how we build the community for everybody. So now I've got a dispeller myth and I've got to say thank you to Andrea who she knows that I quote her on this. She wrote, some say that the participants in Phosph communities are in fact compensated for their labor, not with money but with social capital. The not compensated part with money is misleading. We know now that probably 80% of code is contributed by people who are employed by companies to contribute code. That is a great thing. It doesn't mean that there isn't a place for volunteers in our community but it does mean that we need to recognize where most of the code is coming from, a lot of the support is coming from, and we should embrace that. So even though code is contributed by people who are paid for it, some does come from volunteers and one thing we should remember is even the volunteers need pizza. You don't get code sprints with volunteers coming to code sprints without providing accommodation, without providing catering, without providing facilities. The budget for the code sprints I'll come to but it wasn't trivial the amount of money that goes to putting on the code sprints here. So bear that in mind. And then we come to support, which is the backbone of a lot of businesses in open source. And people say you can get support via the mailing list or you can get it on Stack Exchange. Well, I want to say this and excuse me for swearing, but what the fuck are you thinking about customer? If you think you can post on a mailing list at 10 o'clock at night and get support ready for the following morning because your solution is broken. If this is business critical software, then it needs business critical support. And if you were buying it from a proprietary vendor, you would pay the equivalent of the initial purchase cost over the first four years in support. So there is no reason at all why people should not be paying for support. And I think we have a culture which sometimes is reluctant to say you need to pay for that. And it's something that we shouldn't rely on the goodwill of volunteers to solve customers problems. So this talk was entitled, There's No Such Thing as a Free Lunch. So I thought I'd let you know how much a free lunch costs. It costs around 700 euros a person. I know I said 640. It's closer to 700 euros a person to run a conference here in this fantastic venue. Before somebody says, well, we don't need to be in this fantastic venue, you're absolutely right. We don't. It's very nice, but we don't need to be here. But go find a venue that can support 850 or 1,000 people and you'll find that they all charge quite a lot of money. It costs a lot of money to run this venue. The average delegate income of paying delegates at this event is 595 euros. You don't have to be an accountant to see there's a gap. That's before the students come. That's before the bursaries. 
That's before the various other passes that we give to people and we want to give more passes to more people. And the way we make it work is with sponsorship. Sponsorship this year at Phosphor G is working out at 160 euros a person. That's impressive. Thank you to the sponsors. But it's understanding how this whole thing hangs together. The code sprint was funded by OOSDO and Phosgis. The total bill for the code sprint is over $10,000. This is not cheap. Someone has to pay for open source software or for free software. It's not really free. That's why I say there's no such thing as a free lunch. Quick summary of what pays the bills. You'll know this. What pays the bills is sponsored features, implementation services, etc. You know this. So here are a few thoughts, first of all, for individuals who use Phosphor G. There are two ways you can be an open source citizen. You can contribute your time. You can contribute your money. It's as simple as that. If you're an organization, it's a little bit more complex. And what we need, the large organizations that use our software to do, in my opinion, is to cross the chasm from being a user organization to being a contributing organization. To becoming a contributor. What do I mean by being a contributing organization? Well, fundamentally, quoting Paul Ramsey again, using is not the same as supporting. And also, organization supports open source with time and money. So we're back to time and money for organizations. And here is my five point plan. Take it away with pleasure. Use it, adapt it. Because I think some of this is how we engage with the user community to make them become the supporting contributing community. It's a five point plan. First point, how much should you be contributing? This is a really important question. When you're talking to an organization, how much should they be contributing? And the answer is, it's up to you. But work out your savings. Work out how much you save by using free and open source software. Be very, very conservative on that calculation. And then maybe contribute 5% of your savings, 10% of your savings. Any amount of, any calculation that you do that is based on savings and takes a tiny proportion of those savings starts that organization on a journey to becoming a contributor. You can contribute hard cash. You can donate to projects, sponsor features, sponsor the foundation, sponsor an event. You can contribute time. And an important point to bear in mind for those of us who don't write code is that there is, you can be a contributor to this community without writing code. We need documentation. We need testing. We need feedback. We probably need some UI skills and research. And we certainly need translation. So there are lots of things that you can contribute that don't involve code. And these are things that organizations that use the software may well have a really strong case within their own organization. For example, translating the documentation into Dutch or French if it's not available already. And this is a really simple one. A little bit of acknowledgement goes a long way. Carry out and get your customers to carry out an audit of the software that they're using and then create a page on your website where you just give a credit to all the open source projects that you're using with a link to the projects. That's a great thing to do. Publish an annual open source audit. Just once a year do an audit of what software you're using. Estimate the savings and write a brief one page report. 
If companies did that, we'd have case studies. They'd make the case internally for contributing money. It's a really good starting point for an organization. If you've got one champion in the organization and they do this once a year, you watch the change. I've seen it happen and organizations have gone when they've seen these figures internally. They said we should contribute. And the final thing, and this is usually addressed to users, so excuse this one, but when you're choosing consultants, if you're a user, you don't want to choose the leeches. The leeches are the people who say, oh, we can advise you on open source software, but actually they're not putting anything back into our community. You want the people who are making the contributions to the community. And we need to encourage people once they've decided to use free and open source software to work with the companies who are actually contributing to the community rather than just leaching off it. So I'm almost finished now and I'm on time. There's a sort of ying and yang of open source balancing between being a user and being a contributor. If you use Phosphogy in your organization, you need to encourage that organization to contribute time. If you're working for a Phosphogy business, you need to be trying to articulate these messages to your customers because it will help you to grow the community, to grow the income, to pay the mortgage and all of that. So my last message to you is think about what you're going to contribute in the next year. May the Phosphogy be with you. It's timing. Just look at that timing because it's awesome. Not a word. Great. No, none whatsoever. Please happily earning their keep. I see a question. Thank you. Hello. My name is Matti Pesua. I come from Finland. A question regarding on how to manage different organizations. If I can collect, let's say for example five organizations from Finland and we want to create a new feature to serve it in open source software, are there some bodies or some organizations that can sort of run the project for us or some models that we can look at so that we don't have to sort of make the whole thing up by ourselves and to invent how to manage the money coming from a few different organizations. I would suggest that most projects have a steering committee. Well, in fact, all projects have a steering committee. If you start with the steering committee, they would be the people who would direct you to one or more of the developers that are contributing code. Probably the route to that is to pick an organization maybe that's geographically close to you or for some other reason. Ultimately, you need to pick an organization who contributes code and say, here's what we need. There's two things you have to do. One, you have to work out how much is it going to cost. Two, you have to actually get that organization to steer the feature set that you're looking for, the enhancement that you're looking through, through whatever the approval process is in that project. Each project is different. The first point to call would be find the project steering committee. The second one is choose one of the contributing organizations. All right. Thanks. Thank you. Thank you.
There is no such thing as a free lunch On being an open source citizen Have you ever wondered? - Why do people write software for nothing? - How do those volunteers earn a living? - How do those companies pay wages? - How much did it cost to put this event on? - Is there really such a thing as a free beer, let alone free software? - Is there any obligation on me as a user of open source software to contribute? - How can I contribute to open source if I am not a developer? This talk will explore the open source business model and the motivations of individuals, organisations and businesses that contribute to open source projects. It will hopefully prompt a discussion on what might be reasonably expected of users of open source software.
10.5446/20383 (DOI)
All right. The second speaker is Tomas Holderness, and he's going to give a quite interesting talk, at least judging by the topic: FloodWatch, putting together wearables for people and disaster alerts. Please. Thanks so much. And thanks to all of you for coming. I'm really impressed — I think we're up against some lidar discussion right now, so to have this many people in the room is great. So I want to talk to you about FloodWatch, which was a really cool name that we came up with as we submitted to FOSS4G, and then we were like, OK, better finish building something to show. But we were really lucky. So previously I was working at the SMART Infrastructure Facility, University of Wollongong, down in Australia, and I had a great summer scholarship student, Hasitha, who came and worked with us for the summer to examine new interfaces to cartographic information during disasters. So just to give you a bit of background to the project that I've been working on and continue to work on now: I've been leading the PetaJakarta, the Map Jakarta project, which is all about producing a real-time representation of flooding in the city of Jakarta, Indonesia. Jakarta is a fascinating place to understand in terms of climate adaptation because it is a bowl, and the city is now below sea level. So you see on the map on the screen the sea, the ocean, to the north; when you stand in the street at the harbour, you look up at the fishing fleet, and the height difference between the ground and the sea is as high as I am standing here today — the flood wall is at eye level with me. There are 28 million people that live there, and that's the population of Australia in one mega-city. And all of the water that falls during the monsoon season has to be pumped over the sea wall. So it's a big bowl and it has to be pumped out. Now this year, because of El Niño and then La Niña, there was no dry season, so the monsoon has been running for about 10 months now. And so predicting where the flooding will happen is a function of infrastructure failure: the flooding happens in a place not because of fluvial conditions, but because the wall collapses, or the pump explodes, or the electricity supply for the gates is turned off. So developing a probabilistic model of which neighbourhoods will flood in Jakarta is near impossible, because you are trying to probabilistically determine which one-metre section of concrete wall or levee will collapse at any given time. So what we did is we turned this project on its head, and we crowdsourced the locations of flooding through confirmed reports in real time. We asked people on Twitter, we asked people on a government application called Qlue, which is built by Google, and we asked people on a citizen journalism website called PasangMata, run by an organisation called Detik.com. We asked all of those people to tell us when it's flooding in their neighbourhood. And so these are all the confirmed reports of people telling us it's flooding this year so far, since version two of the project started in December. And so we collect these in real time, and this has been really successful. We've been running this for a couple of years now and it gives people an idea in real time of where flood conditions are.
And so the thing that we've been doing this year, and I actually talked about this at FOSS4G last year, is that we've been piping that data into the control room of the emergency management agency, and now they take that information, those tweets, in real time, and then they can delineate the area, the neighbourhood, that those tweets are coming from, and they look at the photos and then they say, okay, the flood is this high, or it's waist high, or it's shoulder high. And so then we get a publicly available map that looks like this, where we have these individual reports but we also then have the real-time heights of water inside people's houses, the current flood condition across the whole city. So red being the highest, 150, all the way down to 70 centimetres, which is around about knee-high. And as the informatics section pushes those things out to the Jakarta government, different responses are triggered for different areas depending on the height of the water. So this has been pretty good. And so we've been playing around with this data now for a couple of years and running this project and developing this system. And so one of the things that we were very cautious of, or conscious of, at the beginning was that the data is all going to be public, but how is the public going to consume it? So it's all very well having a nice API and pushing the data to some system so that you can download it in your GIS, but how do we give this data back to the community? And so we worked very hard to build what we call a multi-scale map, a website that allows you to, on your mobile phone, get zoomed to your location just like Google, and then you can see all of the reports that are nearby. If you look at the same map on a desktop then you see this city-scale overview. And so if you're a government user or an institution then you might want to see the whole city at one time. If you're an individual user then you see the map displayed at your location with reports around you. And so we have this amazing architecture diagram, which is what happens when you work with architects and designers. They get very excited and they start to draw lines and points and try to explain how your system works. And unfortunately the resolution hasn't come out so well, but what we're seeing here is all of these sources of information coming in from the crowd, people on the ground, being sucked in basically to a big PostGIS database, having a user interface to the government, and then we push that out to our API. And the API has been really well used, and so we have a number of downstream users: the US government, the Australian government, the World Bank are all consuming this information for their flood models to try and understand why it's flooding. But in this presentation at least we're really concerning ourselves not with how, as GIS users, we can get that data, but actually with giving back to the community and working with people who are on the ground. And so information is really critical. You've seen some nice use cases of people using our map to say, oh it's flooding, it's coming, so I should collect the kids from school early, or I should change my route on the way home, or, you know, if I'm travelling I should make alternative plans. That's great. But one of the things we're quite conscious of is that I'm a cartographer and so I think about everything in maps, but not everybody in the world thinks about everything in maps.
And so presenting a flood map to someone... we saw the diagram before with all the lines on, we've probably got too complicated now. We've got all these great layers of information that we've got on our map in real time, and as cartographers we get really very excited about that and we're kind of giving ourselves a pat on the back. But then I wonder what the user experience is on the ground of having to open your mobile phone and say, okay, I need to orientate myself to the rest of the city, which people may not be used to doing. And then I need to understand what these different layers of information are telling me and what that criticality is. Obviously red tends to mean danger, but how dangerous, and what does that mean compared to an individual point coming from one person? And so one of the things that we've been thinking about is that the evolution of mobile technology now includes wearable technologies and wearable devices. So can we use some of these wearable devices? So instead of having to get out your smartphone and look at a map and interpret the information, we could push alerts directly to users in real time that are geographically related to them, so they're probably not thinking about it in terms of a map, they're just thinking about it in terms of proximity. And so we created a bit of software. Hasitha created the first prototype last summer, called FloodWatch, and FloodWatch is very, very simple. It takes a Pebble smartwatch, so it runs on any Pebble, it's one type of smartwatch, it runs on any Pebble smartwatch, and it calls our API and then it does a filter to say, okay, get me all of the reports that are within five kilometers of the watch's location, which is determined from the user's phone, and it displays them in a list. And that's it, that's all we do. And so for us this is very new, and then if you click on one of the reports on that list you can open it and read the text. So this is text that someone has tweeted out to say there is a flood at this location, and so then we're pushing that through our whole web of connectivity in our system, pushing that directly to a different user who's probably very nearby, within five kilometers, but they're just going, oh okay, there's flooding over there in that neighborhood. And so we're quite excited about this, but I'm also very interested and happy to talk with people after about feedback, about what other people think, because this is really for us a prototype. This is the first piece of software that we've written that's targeted at one hardware platform. All of our other development has been very much looking at trying to do mobile cross-platform, and so we're kind of interested in whether this is the right way to go or not, but it's an interesting idea nonetheless. And so the architecture of this system: as I said, the Pebble smartwatch is an interesting piece of hardware for us because it's waterproof, or lots of the models that they produce are waterproof, which obviously, when you're working in flood conditions, is quite important. It's relatively affordable. I would say that it's important to realize that cell phone penetration in Indonesia is 150 percent. So more than half the population have two phones. Many people have a smartphone; from the statistics on people using our website, we know that iPhone usage is quite high. And so there is obviously an issue that it's not going to be affordable for everybody, and we're not trying to develop this and roll it out for all of Jakarta.
There are people there that won't be able to afford this type of technology, but we know that we do have a user audience who is interested in this type of technology and is able to purchase it, and we feel that this is an interesting way to go. But again, I kind of wonder, do we bind ourselves to one piece of hardware? We can talk about that more in a second. And so the way this works is that Pebble provides you a sandbox JavaScript environment on the mobile phone. And so the phone has this JavaScript environment where you can say, listen for data from a specific API, and when that data is received, filter it and organize it and push it to the watch. And so we use this really nice library called turf.js by the guys at Mapbox. And this is kind of cool. I don't know if anyone else has done this yet. We're actually executing turf on the phone in the sandbox JavaScript environment to do that location filtering in real time. Originally I was thinking, well, we should really push all of our processing back to the database, which is what we do with everything else. So we have an Amazon Web Services instance in Singapore which runs our service and we find that works really, really well for us. Even during peak conditions we push all of our geometry processing down to the database and it doesn't break a sweat. It just scales, because PostGIS, when you've got an index on your geometry columns, works really, really well. And so then we started off playing with ideas where, oh okay, you can say I'm at this location, and get me all the points that are around this location. But then one of the issues you have is connection time, because in Indonesia and in Jakarta you often struggle with the 4G network. The network drops out. It often reaches capacity. Prayer time: everyone stands outside the mosque texting their friends, and in the office the bandwidth just drops off. You cannot send an email during that time. It's just peaks and troughs. And so then we started to look at our data and we saw that actually, because we only show the most recent reports from the last hour on the map, the volumes of information that we're pushing out are actually quite low. So within an hour, maybe we have at maximum, during a really peak flood event, a couple of hundred reports, because it tends to be quite localized. People are texting in fairly real time, so we're able to get rid of the reports and we just archive them, but we don't display them on the map in real time from, say, six hours ago. That would be too confusing. So our data volume is quite low. So what we do here is we actually grab all the reports in the whole city, and this works really efficiently. It's quite a small data volume, and then we just use turf and it processes them very, very, very quickly to say, okay, just filter them out, show me the ones that are around my location. So I think that's kind of interesting, as we start to see some of this geometry processing going on on client devices, as opposed to traditionally doing this in the database. So here's an example. So this was actually at the end of last week, just as I was thinking about making my way to Bonn here at FOSS4G. We see a localized flooding event. People are sending messages about some inundation that's causing flooding on the road. Love this use of emojis inside, I don't know which app this is from, and pictures inside. So that was kind of cool.
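As a rough sketch of the client-side filtering just described, not the actual FloodWatch source, the pattern in the phone's sandbox JavaScript environment looks roughly like this. The API URL, GeoJSON property names and app-message keys are assumptions for illustration; turf.js is assumed to be bundled into the PebbleKit JS code.

```javascript
// Minimal sketch of the flow described above, not the project's code: the
// endpoint, property names and message keys are placeholders.
var API_URL = 'https://example.org/floods/reports.geojson'; // hypothetical endpoint
var RADIUS_KM = 5;

function filterNearby(position, reports) {
  // turf.point takes [longitude, latitude]; turf.distance defaults to kilometres
  var here = turf.point([position.coords.longitude, position.coords.latitude]);
  return reports.features.filter(function (report) {
    return turf.distance(here, report) <= RADIUS_KM;
  });
}

Pebble.addEventListener('ready', function () {
  navigator.geolocation.getCurrentPosition(function (position) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', API_URL);
    xhr.onload = function () {
      var nearby = filterNearby(position, JSON.parse(xhr.responseText));
      // Hand a summary to the watch face; a real app would send each report in turn
      Pebble.sendAppMessage({ REPORT_COUNT: nearby.length });
    };
    xhr.send();
  });
});
```

Because the hourly data volume is small, pulling the whole city's reports and filtering on the phone stays fast even on a flaky connection, which is the trade-off the talk describes.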
And so then we see on the right hand side, we fire up an emulator, so the SDK provides us an emulator, and we spoof the phone's location to be just outside of where this cluster of points is. And we can see those points are then represented on the watch just as a list, ordered by time, showing the most recent first. How much time do I have? Plenty. Okay. Great. So, a few things to think about for the future. The other thing that I haven't really got my head around yet is that we have this fantastic resource, not just of the pins, the points of the confirmed reports of flooding, but we also have these polygons that the government is producing, and they're doing that in near real time. So in the control room they're saying, yep, there are some pins, okay, we see, and they mark it up and then that's pushed out live straight away. What I haven't got my head around yet is how we take that data and provide a representation of it in this kind of wearable device, in terms of, if you're in the polygon, should that be a different user interface experience on the watch than if you're outside the polygon, or can we represent the different levels in some way? So I think there's probably an interesting design question there. It would be great to get some students working on this this year, thinking about it, because we're working in a very tightly constrained design environment. The Pebble watch has a very, very small processor. It has a very, very small memory footprint. It just about has some colors now. It just about does animation. It's very blocky. But I think there are some interesting things that we could do in terms of alerting people to the fact that there are not only people telling us there's flooding nearby, but also that the government has said, okay, now we officially delineate this as flooded, and all of the actions that come along with that. And most people in the city will understand that once an area is delineated as flooded, that means that a government response of some form is on the way. Great. The second thing that I'm also really interested in is that through the Pebble SDK, the communication between the Pebble and the phone, we can actually access all of the information about the geolocation provided by the mobile phone, including direction and the accelerometer. So if you're moving through the city, just like with Google traffic, you're not really bothered if there's a traffic jam behind you, but you are bothered if there's one ahead of you. So I think there's a really exciting prospect there to say, oh, well, instead of just showing the reports in this circle of five kilometers, start to shape that. And so we're still doing stuff geometrically, but we're also then looking ahead and saying, if we're traveling at a certain speed, then really we want to be forecasting out to see what reports are ahead of us in that direction, five, 10, 15 kilometers away, as opposed to just showing me things that I'm going to arrive at in a minute. So how much time is there available to someone who is in a vehicle to be alerted to the fact that flooding is ahead? Also, I want to say that at the moment our FloodWatch is just passive. So it's just, you open it and it's pulling the reports that are nearby. Pebble this year have released their timeline API, which allows you to push alerts to the watch. And so you subscribe to their API and you push them data and then they push it to all users.
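The "look ahead" idea mentioned a moment ago could be sketched with the same turf.js toolkit. This is my own illustration, not something the project has built: the fixed circle is replaced with a wedge opened out along the direction of travel, sized by speed.

```javascript
// Hypothetical "look ahead" variant of the radius filter, a sketch only.
// turf.sector and turf.booleanPointInPolygon are standard turf.js functions;
// heading and speed would come from the phone's geolocation data.
function reportsAhead(position, headingDegrees, speedKmh, reports) {
  var here = turf.point([position.coords.longitude, position.coords.latitude]);
  // Look further ahead the faster the user is moving, between 5 and 15 km
  var lookAheadKm = Math.min(15, Math.max(5, speedKmh * 0.25));
  // A 60-degree wedge centred on the current heading
  var wedge = turf.sector(here, lookAheadKm, headingDegrees - 30, headingDegrees + 30);
  return reports.features.filter(function (report) {
    return turf.booleanPointInPolygon(report, wedge);
  });
}
```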
And so we could start to do a really nice thing now where we could say, OK, a new polygon has been designated, push that out to all of the Pebble users in Jakarta. We could then filter that by location and show it to, say, it's in North Jakarta, so only show it to people who are in North Jakarta at that time. Or you could even have a subscription service where, in the configuration for the app, you could say, I want to always be alerted about this area because that's where the school is and that's where my kids are. And so anytime, regardless of where I am in the city, I always want to get alerts from that region. And so that's kind of interesting, how we build that mapping inside PostGIS to do that filtering in our backend. Not quite sure yet. And then a couple of challenges. We have now a better development environment than when we first started a few years ago, which is great, and our office team is doing really well. And we have a fairly stable internet connection, although it still can be trying at times. And luckily, the office has not yet flooded. The water hasn't quite come in. So we're doing pretty well. But for me, as a massive FOSS4G advocate, all of this stuff that I've talked about today has been built with free and open source tech. And that was a real winner for us, because we went and had conversations with the government and other NGOs on the ground, and of course the first question often is, how much does it cost? And so we were like, well, it doesn't cost anything. It's free. It's open source. If you want to take the source code and run it yourselves, if a different government somewhere else in the world or in Indonesia wants to use it, then they can do that, and we make the data open as well. And all of the data is archived and is actually recorded in the National Library of Australia. And so everything is open, but now we're starting to work with firmware. We're working with a piece of hardware here. And it's not open. It's a black box. It's closed source. It's not open source. But I kind of feel that this is an interesting direction for us to move in. But what do we do, what's the challenge for us as a community, in terms of all of these devices that are now coming onto the market? We want to be using our free and open source tech, because otherwise we're just going to be overrun in a deluge of Google and Apple smartwatch apps. So how do I start to build open source stuff in this proprietary framework? And I don't have an answer for that yet. The code for the app is open source, but it still runs within this black box environment. We can't see inside that, which, for me, is a real shame. So I'm going to wrap it up there and say thanks very much for listening. I hope that was interesting. Thanks. First question here. This is really cool. I think if I was the user and I had my smartphone and I had this map, the first thing that I would really want to do is to be able to get from my house to my grandmother's house or to the school, and to have routing information saying what's the safest route, or what's the safest route out of town; that would be really neat. I agree. I think that sounds like an awesome idea. And all of the parts are there now. We've got OpenStreetMap. There is no national map of Indonesia really. So the World Bank funded the Humanitarian OpenStreetMap Team to make the de facto map, which is OpenStreetMap. So every government agency uses OpenStreetMap, which is amazing.
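For the subscription idea described above, the backend filtering could be little more than a spatial join in PostGIS. The sketch below is illustrative only: the table and column names are invented, and it is written for Node with the pg driver rather than the project's actual stack.

```javascript
// Sketch of a server-side subscription filter (invented table/column names):
// when a new flood polygon is delineated, find the users whose saved alert
// area intersects it. A GiST index on both geometry columns keeps the
// spatial join cheap in PostGIS.
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the environment

async function usersToAlert(floodAreaId) {
  const { rows } = await pool.query(
    `SELECT s.user_id
       FROM alert_subscriptions s
       JOIN flood_areas f ON ST_Intersects(s.area_geom, f.geom)
      WHERE f.id = $1`,
    [floodAreaId]
  );
  return rows.map(function (row) { return row.user_id; });
}
```

The returned user IDs could then be matched against timeline API subscriptions so only the relevant watches get the push.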
So that's what everyone's base map is. And so it's all in there. The data is in there. The roads and the direction. So we could do it. I think that that might be beyond what we can do as a small team. But then I think we've got API functionality now. We can push things and pull things. And yeah, I would be really interested in something of people building on top of what we've done. That would be great. So what is the kind of culture of privacy in Jakarta? That's question one. Are there issues? I feel like if this was in the US, people would be like, well, I don't want them knowing where I'm being at all times. I don't want this app to know my location necessarily. But is it the same? Are there issues with that? Yes, to an extent. But they're not perceived in the same way as it is in the West. I think that the use of the fact that we have a very strong collaboration both with the government but also community groups on the ground gives us a level of trust that we've managed to acquire by working with those people. And I think that the open source has been a really strong tool in that discussion where we've said, actually, we don't store your location. Actually, we hash all of the user names so that we can't go back to see who said what. So I think it's a tricky one because actually all of that information is available on Twitter and is available in the Twitter archive. It's just what we're doing with it. And so we're trying, we try as much as possible not to build a database of who said what when. And so we separate the users' name from the tweet, or try to at least as much as possible. It's not always possible. Because we want to reply to users and say thank you for your report. It's a good example. That was a trade-off for us of do we just ignore who the people are or do we reply and say thanks. And so, yeah, I think it's an ongoing conversation. It's something that we try to evaluate as we go along. But we often say, well, the code's there. It's open source. You can see what we're doing with it. The data's open at the end of each Monsoon season. We publish the data online so people can see it. And that's anonymous. But you could just go and search for that string on Twitter anyway. You know what I mean? So it's an interesting one. Yeah. And I guess my second sort of related question is like, have you run into any other issues where a lack of information about flooding area or maybe possibly like a mistake in the reporting has led, has that led to any issues? Yeah. I don't think there have been any mistakes per se from the control room. But one of the things that we have seen is that when so they mark an area is flooded and then if that team is really, really busy with something else happens, then that area can often be left as flooded and there's no time out. There's no expiring. That's human controlled at all times. And so that can sometimes get overlooked, particularly if there's a shift change because that office is manned 24 hours. And so it's actually a human interface element of how we design the software for the control room. Maybe we need a timer that pops up saying, oh, is this area still flooded? But that could also be really annoying, right? If everyone's like, yeah, of course it's still flooded. We're busy doing this thing. Stop annoying me. Stop with pop-up. So no mistakes as such. But then people sort of saying, well, this area has been flooded for 24 hours now. 
And it's actually clear that that's probably just because that little area has been overlooked when the new shift has come in and then rechecked everything. Thank you for your valuable presentation. Our team developed a three-dimensional urban simulation system on the web. So I think FloodWatch is for providing the current situation, right? So could it be applied to our system, the future simulation system? So just to check on your question, you mean if you've got a prediction of where things will happen in the future? Like where? With a lead time of two hours. Real time, sorry, what? A lead time of... two hours. So I mean from the current time to two hours ahead. So like a prediction of things in the future? Oh, yes, right. I think it could be. As I sort of said, I don't think that can be the use case in Jakarta, because there can be no prediction, or not an accurate one that I have ever seen. And the previous talk looked a lot at using satellite imagery, which would be great, but the width of the streets is less than the resolution of currently available satellite imagery. So I think, yes, it could do exactly what you're saying if we had a model that allowed us to be predictive and send out those warnings. At the moment the weather service issues a very coarse warning for rain, heavy rainfall, and also if the gauges upstream are very high; if those two things occur, then they sort of alert. So that's, you know, alerts on the scale of half a city, you know, 40 million people. We will use satellite imagery and language and urban spatial data. Great. Let's talk after, that sounds interesting. Hi, thanks for a really interesting talk. I just wanted to ask about any interaction between the work that you've been doing and the work on the InaSAFE QGIS plugin, which I know has been used a lot; well, I think it was developed originally in Indonesia and has been used in Indonesia, because, I mean, obviously they're looking at modelling in a predictive way and trying to assess potential impact and, you know, I can see the potential for kind of feedback and interaction between the two projects. Yeah, thanks. So InaSAFE consumes our data as the validation model now. So we are the definitive source of where flooding is. And so they consume us and then archive it to do damage assessment predictions and also to check against models, not necessarily predictions of where flooding will be, but predictions of what resources are required, which is kind of what InaSAFE does. So, yeah, they just consume directly from our API and they suck that straight into InaSAFE, as far as I know. One last quick question. Very quick. I was just curious about the price of those watches. Are they very cheap? And is that the reason you chose them instead of other smartwatches? Yes, we're all academics and a few of us have one. So that tells you about the price point. Sub-100 US dollars. For one of these Pebble watches? Yes. All right, so thanks again. Thank you.
FloodWatch is a prototype application that provides location-based alerts of flooding on Pebble Smartwatches. Building on the open source flood map PetaJakarta.org, FloodWatch aims to provide residents of Jakarta, Indonesia with time critical alerts of monsoon flooding via their wearable devices. PetaJakarta.org is a real-time flood map which integrates emergency services information, social media, citizen journalism and sensor data to provide real-time situational awareness for both residents and government agencies in Jakarta. Existing disaster maps and mobile alerts require the user to interact with their smartphone device to consume and interpret reports. Alternatively, connecting real-time disaster data to wearable devices for the provision of actionable intelligence "at a glance" reduces disruption to user activity, improving time to response. Furthermore, through the use of predefined locational filters the user is able to register for alerts for specific regions of the city providing a semi-automated user-centric interface to incoming reports of disaster. This presentation examines the motivation behind developing wearable tools for disaster response in the world's fastest growing city, and explores the software's underlying open source geospatial technologies.
10.5446/20380 (DOI)
Hello, welcome to this session. We are now in the mobile session and we have two presentations here. The first one is NextGIS Mobile from Maxim Dubinin and Dmitry Baryshnikov, about an Android SDK, and after that, Arne Schubert from WhereGroup, about two-way data binding in mobile applications with Jager. Okay, 20 minutes, 5 minutes of questions, then a 5 minute break. Should I use a microphone or something? Hi, my name is Maxim. Just a quick show of hands, who has heard of our mobile application? All right, so I can just say anything. Nobody has heard anything about it. It's me. I'm going to give you an overview of the application. I'm not going to list all the functions because there are just too many. So I'm going to stick to some use cases, both technological and operational, that you might find relatable. So NextGIS Mobile is an application written in Java. It's only for Android. Its primary purposes are data visualization, collection and editing. It's 100% open source, needless to say, so you can find it on GitHub, like everything you can do with open source applications. There were already a few talks about different applications which are sort of in the same field. It would be really interesting to know what you guys think about how they compare. We are actually not just developing this particular application. We are trying to develop a fully integrated platform of things with its own core web geodata server, sort of. And we're trying to make it in an integrated way so that every piece of this platform knows about the others and is able to sort of send data around. This is kind of important for us. But this application can also be used by itself. So there is no pressure for you to use any other pieces of infrastructure for most of the functionality. So there are a few particular things that actually make it different from a whole suite of other geodata-related or geospatial applications that are out there. And I'm going to just list some of them. The first one is that it's geared towards working with multiple layers. So indeed, there is a lot of this talk about don't make applications that have many layers. But in practice, we always find ourselves in a situation where we do need multiple layers. If I'm working, for example, collecting tiger data, I do need tiger tracks. I want to see highways, buildings, and whatever forest loss layers. So I do need multiple layers. And this is what our application handles quite well, I think. And it does add complexity in terms of how you render things. Another thing is that it understands most kinds of geometries. So it can work with points, lines, polygons, and also multi-features: multi-lines, multi-points, multi-polygons. Sort of important, we try to make it as easy, as close to your actual GIS as possible. And you can work with your own data. This is not an application for some data which is prepared for you in advance that you then use for navigation or something. This is an application where you get some data from your main GIS and put it on your mobile, almost unprepared. There is a caveat, of course; there are things you might want to do with the data to put it on mobile. Also quite important for us: everything that can be edited is editable in this application. We don't edit rasters, but we do have functionality already to edit vectors in all types of geometries. We also do some checking for topological errors while you're editing, so it's pretty advanced.
You can also create layers right in the application, and set up the data structure and things like that. I'm going to give you a few examples illustrating some particular, I think, nice features of this application. The first one is convenient data entry. Of course, we have some data on our mobile. We expect to be able to edit it. We can edit geometries. How about attributes? The really easy way is to open your data table and start filling in values. This is okay. It works. It will do your job. But what if you have unskilled surveyors in the field that need to be helped to enter your data correctly, so you don't fall into a situation where you have lots of data that you need to clean up afterwards? We have a companion to this application called FormBuilder. FormBuilder is an application with which you build a form for your mobile, and it renders exactly the way you see it here. It's a desktop application. Basically, one of the cases we worked on is an advertisement company that is checking the billboards, whether they are up there, and they created this survey form which is, I would say, at least a little bit better than just a list of fields. You can also add photos. These will also be possible to extract from your application after that. The next very typical case for us is collaborative mapping, and database integration. As I said, it's already quite advanced in terms of what you can edit with geometry. In this example, for example, we can see that there are donut-shaped polygons, polygons with some advanced features, not just a square. The collaborative data collection workflow looks like this. First, you define your structure. What are your fields? What type are they? You upload this to a database to create a layer out of this structure; it can be empty or it can be pre-populated with some data. Then you access this layer from multiple devices, from this mobile application, and start collecting your data; the data gets synchronized and is available on all other devices, for all other surveyors, at the same time. Well, here is where it gets sort of connected with the other stuff we are developing. This is called nextgis.com. This is a place where you can actually throw your data structure, connect to it, get it back, and start sort of mapping stuff and have it in your database in a way that is paperless, like 100% paperless. Another thing we've been working on quite a bit, and this is a new feature of the 2.4 release that we issued just a few weeks ago, is working with additional map sources. So another example: we're working with the Russian Academy of Sciences on some project that is collecting a lot of wolf data. They do have a handy form, but they also want to have a specific, very special base map that they work on, because they are in some remote areas. They need some special rendering and stuff. And here is where another thing that we developed comes in handy. This is called QuickMapServices. If you're working with QGIS, you might have seen it in QGIS. This is a catalog with an API now, and you can publish some service that you have somewhere. So these guys, for example, this is called OpenTopoMap, and this is a service, and it is published. And you can search for something, find it, and with one click get it in your GIS. So we see it as something quite new, and it's no more loading layers and finding some data. So working with services like that is pretty handy.
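To give a feel for what connecting to a catalogue API like that can look like from any client, here is a hedged sketch. The URL, query parameters and response fields are placeholders rather than the documented QuickMapServices API, so check the real API reference before relying on them.

```javascript
// Hedged sketch only: query a service catalogue for tile services matching a
// keyword. Endpoint and field names are invented for illustration.
async function findTileServices(keyword) {
  const url = 'https://example.org/api/geoservices/?type=tms&search=' +
    encodeURIComponent(keyword);
  const response = await fetch(url);
  const services = await response.json();
  // Each entry is assumed to carry a name and an XYZ/TMS URL template that a
  // mobile or web map client can plug straight into a tile layer.
  return services.map(function (s) { return { name: s.name, url: s.url }; });
}

// e.g. findTileServices('topo').then(function (list) { console.log(list); });
```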
So there is this new feature in NextGIS Mobile, something we call Geo Services. You connect to the same API, you say I want satellite data, or I want something else, and then it goes through the list of services, filtering it, and then you can have different kinds of stuff on your mobile without dealing with files, actually, right? Only dealing with services. Here are just some examples: this is MapSurfer, this is something called Sputnik, and this is something where you have multiple services overlaid, this is Strava data overlaid on some grayscale OpenStreetMap backdrop, whatever. So this is all pretty useful. Another thing these guys are really interested in is working offline. As I said, if you work with this vector data, it's already synced; it is possible, you don't have to be online for your data to get synchronized. You come back to your office, there is internet, it gets synced. But I'm going to show you a couple of examples of working offline with rasters. Let's say you have a full project already set up and everything is in it, right? And then you have multiple layers, you have multiple styles for each layer, for different scales, things like that. So we do have a plugin, this is called QTiles, and if you are working in QGIS, you are in luck, because you can say QTiles, render it into some cache format that can be uploaded to the mobile, and here you have your whole project sort of pre-rendered for you to use offline when you are out in the field somewhere. These are guys from some protected area; naturally there is no internet there. And so they are using it this way. Another feature of working offline is caching, right? So you can have your service connected and then you can cache it for selected zoom levels, to download to work offline. All right, and the final slide, the final case, is custom applications. NextGIS Mobile is not just an application, it's an SDK. It consists of a few parts, and if you are a developer, you might be looking for something like this. Each part has a different repo on GitHub. So there are three parts: MapLib, MapLibUI, and GIS app. The main part is MapLib, and this is where all the classes are for data storage, the map itself, all kinds of utilities, network, and things like that. Then there is a UI part which hosts all the classes for dialogs and forms. And finally, everything you see in the mobile application itself is called GIS app, right? So this is the main app; it's actually a sort of wrapper around these two main libraries. And using these libraries, you can do, or we can do, pretty nice things. I'll just give you a few examples. We've been, for example, working with this company called CompuLink, and they are building hundreds of kilometers of fiber optics throughout the country, and they need a solution in the field where people would register every drill hole they made, and all the checks on all kinds of infrastructure they used for building these pipelines, and they are using a custom version of NextGIS Mobile to actually make it happen, because most of the features are already there, and for them it's mostly redoing the front end of the application. And as another example, we are working on this illegal logging, forest violation stuff with WWF, and there are also very complex forms for counting trees in the forest and different kinds of stuff. It's already out there and helping protect our forests. A few slides about future plans.
We are in the process of switching from pure Java to NDK C++. We are also building our new application on GDAL 2, and it will hopefully bring us all kinds of things, like access to hundreds of formats that you will be able to upload. We are also in the process of optimizing the rendering using OpenGL ES. It's already sort of operational, but it's not in trunk yet, or it's in trunk but not in the application. The preliminary results are quite nice: we have rendering that is dozens of times faster compared to the current solution. Hopefully, this will also give us more flexibility in terms of working on our iOS version of the application, because it will use the same rendering engine and the same library. Some more future plans. We are working on GDAL 2 to improve the TMS driver to support all these different kinds of services, and it also involves some better cache management. There will be a talk tomorrow about something called Borsh. This is a build system that is used for building GIS components, GIS libraries of various sorts, cross-platform, and our Android application, the future one, will actually be built with this system as well. Another thing we are trying to do is improve sending data to mobile without having to actually shuffle files around. That is it. Please try it. Let us know what you think. Here is the link. This is the official link. You can find it on GitHub. You can find it on Google Play. Let us know. Thank you. Questions? No question. One question. I am not sure whether the mobile app has this, but when logging a point, can it be snapped to the nearest feature? The current version does not do that. When you edit, as on one of these slides, it actually shows that you have access to all the nodes. So you can actually edit a particular node as well. But maybe Dima has something to add. This is planned for the next version, because GDAL gives us this functionality. Another question. How do you store these features that you edit there? How is it stored and where do you put it back? Is it in a geo database? It is stored in the mobile application in an SQLite database. It is not a spatial database, because Android does not let us use this functionality. But in the next version we will use one. Now we are playing with GeoPackage, but maybe SpatiaLite. We will choose. Right now GeoPackage is preferable. On the server or in a desktop application, there are several ways. For a desktop application you can export to GeoJSON. And for the server, it talks via a REST API. Generally, the export and import format is GeoJSON. Another question. How many people knew about this? Two. How many will download it? Oh nice. Okay, thank you very much. That is all.
NextGIS Mobile is an open-source SDK for developing mobile applications and a reference mobile GIS application. It is also accompanied by a set of tools for building custom forms and transferring data between mobile and other software. It was first presented at FOSS4G 2015 and after a year of development has undergone considerable changes and improvements. We will review improvements to the SDK and application and talk about development based on the libraries it provides, case studies and challenges.
10.5446/20379 (DOI)
Hello, everyone. Thanks for coming. Please welcome, sorry, it's Robert Norden. Please welcome Robert Norden from Norkart from Norway, who will talk to us about the internship program that they have run at Norkart since 2010. Thank you. Thanks. It's great to see you all here. I realize I might have some prestigious competitors in other rooms. I admit the title was a slightly scary one, but I hope we'll all see the light at the end of the tunnel by the time we're done here. So I'm going to talk to you about the summer job program we run at Norkart and how we use open source in the program. For those of you who were here before lunch for Alexander's talk, you probably know a bit about Norkart already, but for anyone who wasn't, I'll run through it quickly. People ask me at FOSS4G, so this Norkart place where you work, what is that? I kind of go, you know, it's just Norway's biggest proprietary software vendor for GIS, kind of, but we're a lot more than that now. We've made some big changes over the last few years. The Trondheim office is leading the charge there, really working on open data and open source, but I also think that the summer job program we've been running contributes to that change. So this is an actual quote from the end-of-summer presentation that myself and my friend gave back in 2010. You know, Barcelona was where we had FOSS4G, Norway had the Eurovision, Germany won. I still thought that the Nokia N900 was the future of smartphones, and very few people in Norkart really knew that much about this open source software thing or really cared about it, because, you know, Norkart had been doing proprietary software for 30 years and was doing fine, thank you very much. The summer job process was quite ad hoc as well. So basically, I had two years left at university, and I sent in an unsolicited application, you know, do you have a summer job? Question mark. And my friend basically just talked to the man who'd become our boss. And he took a chance on us. We were hired for the summer and more or less just given a goal and then just, you know, get on with it, we'll see you in six weeks. I wrote a program that converted our proprietary styles to SLD, and my friend was working on how to store our data in PostGIS. So, you know, it was fun. We had a presentation to show for our work, and we did a little educating on FOSS4G while we were at it, as you can see. And yeah. So this year, not so much; no, we needed no educating on FOSS4G at the ending presentation. Some people had to be sent out to get more chairs for the meeting room, because too many people had showed up to see it. And my friend and I finished at university in 2012. We were both hired by Norkart. And since then I've always stuck close to the summer job program, because I like to pretend that I'm still young, hip and cool. Now we have an actual process with job adverts, interviews, mentors, follow-ups. So this year, I was helping to coordinate nine students in three office locations with a two-day gathering at the start and one frankly nerve-racking Skype conference with the entire company at the end. It's great fun though. Now, summer job projects are first and foremost learning experiences. And you have to treat them as that, because they're the way you try out new ideas, make prototypes, and get to know people. And if the result is a prototype that dazzles or a near-finished project, then great. If the result is proof that that approach just doesn't work, well, that's also great.
Because otherwise you're going to have to fight for resources internally, spin off a big project with full-time employees, and then find out it doesn't work. And no matter what, you'll gain knowledge and fresh impulses from bright young minds, and it can be quite fun too. Tryouts trump interviews. Now that's something that Matt Mullenweg, who started WordPress, has said. He started the company Automattic. Basically, anytime they're considering hiring anybody, they pay them to work on a project for them for a few weeks. And then they see how that turns out before they decide whether they want to hire them or not. So this is the second major factor in our summer job approach. In many ways, the summer job is actually a six-week long interview where we're considering you for potential positions two years from now. Because after all, the cost of one summer job project is almost nothing compared to the cost of hiring someone who doesn't work out. So if we're impressed by your work, you know, we'll follow you with interest through university. And if we have an opening at the time you're graduating, or for that matter, if you go to work somewhere else and we have an opening later, you know, we remember who you are, and we'll stay in touch. And then you can maybe get an offer. And it goes both ways, of course, because, you know, students get to know us when they work for us as well. So we've had it both ways. I mean, some students, you know, jump on the offer once we give it. Other students have said, you know, thanks, but no thanks. They had a nice time working for us, but they think they want something bigger, corporate, more proprietary. I don't know. I myself, I mean, I worked one summer for Norkart, and then I worked one summer for Telenor, you know, a big global telecom company. They had a lot more swanky offices, but not quite the same personal touch. So I came back to Norkart. Either way, whether you decide you like it or not, whether we decide we like you or not, everybody's a winner when you get to know each other better, because nobody makes costly mistakes. So what else do students get out of it? Well, obviously, we pay them money, which is nice. But also, students at Norway's biggest and, in my totally unbiased opinion as a former student, best university for technical sciences are required to have 12 weeks of work experience before they can write their master's thesis. Six of those weeks have to be demonstrably relevant to their thesis. We can also offer to work with them on their thesis if they're exploring something that aligns with our interests. So the formalities are taken care of. You know, that's not really the important thing for me. Personally, I felt when I had a summer job that I learned more during those six weeks of summer than during a whole semester at university, you know, more about what it's like to actually work for a living, more about my craft as a programmer, and a lot more about reading OGC standards documents than I ever thought I needed. Also, I generally think that summer students have a lot of fun when they spend the weeks working and learning and not having exams at the end. So let's talk about the kinds of projects that make the best summer projects. We've tried out a few things over the years. The two main criteria are, one, that they have to be achievable within the time frame, which is typically six weeks, and second, that they have to be independent of other projects. The first one is obvious.
They only have six weeks of summer, so, you know, it has to fit. The other one isn't quite as obvious, but it's quite simply that you can't do any work in the summer project that any other serious projects will depend on, because sometimes some projects just don't work out. You know, if it doesn't work out, you have to be ready to just throw it away. And on the other side, equally as important, summer job projects can't depend on other projects getting stuff done, because it's the summer. People are on holidays, things move slowly. Nobody knows who's responsible for what when that person's away. When you're doing a summer job, you have to be able to work independently without waiting for others. So the best summer projects are prototypes or single-use applications, you know, short and sweet with a defined purpose. We've had a lot of success setting, you know, one-time internal tasks that need doing, like I did with the SLD conversion. And, you know, if they have time to spare, then they can spend that time polishing the project, making it nicer and better, rather than us trying to put them on anything else just because they have time left over. And at any rate, as I'm sure you all know from experience, projects expand to fill the available time. So I've never experienced summer students not having anything to do in the last week. Some of you might be thinking that it sounds like a lot of work to manage a student or two for the summer, but it isn't really. Because since the projects are designed to be both independent and expendable, we can give the students a lot of freedom in how they do it. You know, they get a mentor, they get a goal for the project, and sometimes we give them a few guidelines about how we want it done. But generally, we just turn them loose for the summer. This goes back to the tryout idea, because letting them succeed or fail on their own terms is a strong indication of, you know, are they smart, can they get things done, how will they work out in the future? Now, I wouldn't be standing here talking to you about this if all it did was act as an extended interview. We've gotten some pretty good stuff out of the summer job projects over the years. I mean, back in my day, we were working on converting styles to SLD and seeing how this PostGIS thing works. And two years after that, we were serving a billion tiles a year on that stack with PostGIS, GeoServer and MapProxy. So that was a success. A few years after that, a student used Leaflet to prototype a web-based version of an Android and iOS app that we had for municipal maps. And today, the evolution of that prototype is used by, well, almost half of Norwegian municipalities. So over 200 Norwegian municipalities are using that map every day. In one of our more ambitious summer projects, which Alexander was also talking about during his talk, when Norwegian cadastral data was opened, we set a group of students working on that in cooperation with a local newspaper. And they produced a platform for integrating that data into the local newspaper's internet sites. Basically, who's bought what for how much? And everybody loves that kind of gossip, you know? It's great for local papers. And we were the first on the market with that idea, so we actually sold quite a bit of it. I did a rough estimation in my head about exactly how much we got back. And for every krone we spent on the summer students and a bit of marketing and stuff like that, I think we got 10 back.
So in a way, it's a bit like venture capital. You have lots of projects. Some of them don't work at all. Some of them are okay. And some of them just pay off spectacularly and allow you to spend more money on other projects. Also, another nice effect is that it gives us something to boast about. Obviously, I'm here now talking to you about this. But I was also at FOSS4G last year talking about our work on putting Norwegian datasets into Mapbox vector tiles. And I was doing that largely off the back of the work that our summer students did. And basically, I took a lot of credit. That was fun. I'm kidding. I did say that we used summer students for it. You know, we blog about it. We talk about it. We think it's good PR for us. So we like to spread the word. Of course, we think we've done good. But, you know, another way of knowing that you're onto something good is when competitors start doing suspiciously similar things. The project with the 10 times return on investment I was talking about: half a year later, one of our competitors came out with a product that was so functionally similar that, you know, we could only be flattered. If we were Oracle, we probably would have had some other emotion. But we were flattered. This spring, we saw the same company put out a job advert for a summer internship, which I have to say reminded me a lot of some job adverts I'd been writing lately. And it came complete with promises that candidates would be allowed to work with Leaflet, PostGIS and so on. Speaking of these technologies, I mean, why is it that our summer projects end up using large amounts of open source software, especially considering that all our traditional software suites are just, you know, proprietary and big and massive? True, sometimes, you know, the goal is explicitly to explore this technology. And then, you know, they go and explore it. But mainly, it's just because our students have a lot of freedom. We say, this is the goal we want to achieve, you figure out how to achieve it. And then they choose open source, because open source is what they know. It's what gets them the results they need, and it gets them the results they need in the short time span they have. Because, you know, negotiating for some proprietary software package probably takes like three weeks sometimes, and then half your summer is gone. Also, who wants to spend money on summer projects apart from what we're paying them? But most of the time, they never even consider anything that isn't open source, because the very idea of it is alien to them, as I imagine it is to many of you in this room. Coming back to some other things that Alexander mentioned in his talk, how has this transition been going and how have all these projects with open source tech affected our company? Well, as I said earlier, it's helped us make some good hires. And not all of those good hires have been people who have been in the summer job program. Some of them are people who were interested in open source and, you know, had heard that we had started turning the boat around towards open source, and they talked to people who had been in the summer job program and, Bob's your uncle, they decided to come work for us and they were helping push the change even further. Also, you know, this culture shift is not just, you know, a feel-good thing. I mean, we go to FOSS4G, we sponsor FOSS4G, and we start using open source tools when we know they're the best tools, instead of, you know, proprietary things.
But I'd say that it also helps us with new revenue streams, as Alexander was talking about earlier. There's the contract with the Norwegian Army's military geographic service, which was developed and published in the open. My colleague Otler is talking in the fireplace room tomorrow about the project we did for the government, which was also developed entirely openly and published as open source. And I have to say, I mean, I was in that company in 2010; it would have been pretty unthinkable back then to do anything like that. Finally, I want to say this: you should do it too. You don't have to make a big deal out of it. You know, you can just start with one or two students and get a feel for it. I mean, I know a lot of companies already do that. Just give them the right project, freedom to do what they want. Running a good summer job program is good for the students. It's good for your company. And it can even be good for your profits. You don't have to take my word for it. I mean, I'd like it if you did. But we do actually have two previous summer job students from our company at the conference now. And they're here because the organizing committee has their own student job program. So I'd like to applaud the FOSS4G organizing committee for having the foresight to provide a student job program. If you want to talk to me or talk to them, there's one right behind that camera over there. Then, you know, just come find me and we can talk about this. So thank you for your attention. Thanks a lot for this talk, for this presentation. We do have five minutes for questions. So are there any questions in the room? Yes. It's Greg from Septima in Copenhagen. How do you deal with the failures? Because of course there will be failures. There always are with anyone. So how do you deal with failures? How do you make sure that people don't go back depressed? I don't think anyone's ever left us depressed. Yes, this is Alexander, who also works for us. Yeah, well, it's not like they're failing utterly, like it's a really bad product or anything. I think we've only had more or less successful technical projects. It's just that it's not a commercial success in the end. And I mean, student interns, if they leave our company and don't come back, they don't care if it's a commercial success or not. It's fun if it is, but they don't care if it's not. So it's on the commercial side of it, I would say, that it's often not as successful. Yeah, and like I said, part of it is really this interview aspect. If we feel that it didn't gel and you didn't work with our environment, or our environment didn't work for you, then that's a fair thing, you know, you just have to be honest about it. We don't think that this is a relationship that's going to last into the future, sort of thing. But I mean, apart from that, we're very fair. I mean, we give them all references and it's paid well for the time and everything. So I don't think anybody goes home too upset. Yeah, you've talked a lot about hiring students and everything being fine. But what about hiring the right students, do you have any words on that? Yes, I mean, we do, because this year we have nine students, last year we had eight students, but I think we had twenty-something applications. The thing is that, as I'm sure you're all aware, our field is actually pretty narrow. So there aren't a tremendous amount of students in Norway with the right skill sets.
You know, at my university, in a typical year, maybe two or four graduate. So there have been years where we've basically hired every student there is. And there have also been years where we've basically given offers to every student there is. But this was something I think I wrote more about in the abstract than I actually ended up saying in the talk here. But I mean, part of it is simply the competitive edge. It's not just that we're evaluating students to see if they're good enough for us. It's that we're marketing ourselves to students and saying, hey, we're good enough for you. Because, I mean, we're a great company and people love working for us, I say. But we don't have the absolute highest salaries in the business, because basically anything that has to do with oil automatically has the highest salaries. So we just have to convince people that we're a nice place to work. And the best way of doing that is having people work for us. And then they can tell others about it. Just a question on the mentoring. Because you said they come in the summer when people are away. So I'm just wondering, are they actively taught things or are they expected to pick it up on their own? Well, I mean, yeah, the mentors are basically picked in advance based on their holiday plans. So a mentor won't be on holiday for more than two out of those six weeks. So they'll be available. And we give them, I mean, now we spend two days at the start of their summer job period where, because they're at three different office locations, we gather them all together at the main office. And we have one full day of propaganda about how great this company is, and this is how you do your time sheets, and this is how the company was founded, and stuff. And then we take them out and wine and dine them. And then the next day we have a full day of, well, it's the company-wide internal GIS conference, where we all update each other and stuff. And they get to be in on that as well. So we do a fair bit. Again, that's a development since back when I started; you know, it was just like, here's the desk. And when I was finally permanently hired, everybody assumed I knew everything about the company. And in fact, there was quite a lot I didn't really know. So that's kind of where this came from, which is why, when I got my hands on the summer job program, I decided to push for a proper way of doing things at the start of it. More questions? We have time for one more question. Maybe one last question would be, do the students have access to a mentor, like, every day? Or do you help them get organized at all? And can they ask questions to anyone anytime? Oh, yes, definitely. I mean, yeah, it sounds very corporate, but a core company value for us is that we're friendly and available for questions. And I mean, for the summer students and everybody. So, you know, they can ask the mentor, and if the mentor doesn't know, the mentor tells them who to ask; it's no problem at all. Okay. Yes. Well, thank you. Thank you very much. Such a high quality audience. Thank you. Thank you very much.
In Norway, the demand for technical talent in our field far outstrips the supply, so when you want to recruit the best, you have to sink your teeth in them at an early stage. Since 2010, Norkart has run a summer internship program where we assign students projects with a lot of freedom in how to reach their goals, and encourage them to explore new technologies. Unsurprisingly, this often means a lot of Open Source software! Some projects end up as writeoffs, some have modest returns, and one has even made a 1000% return on investment... This talk aims to show how the program has recruited good talent, enchanced our image, and given us new impulses that have changed our corporate culture and led to an expansion of our market offerings.
10.5446/20376 (DOI)
So, we continue with the next speaker, Clemens Portela. I really like it. When I saw the opportunity to be a chair at this conference, I saw this slot with Athena, Clemens, and George, and I said, oh, that's the session I want. It's a good group, and good work is done there. Myself, I'm a frequent user of the ETF test framework, which Clemens is going to present, so I'm really looking forward to his presentation. And he just told me that this is his first FOSS4G. So although he has visited hundreds of OGC meetings, this is his first FOSS4G. So let's embrace Clemens here. He has worked on a couple of very nice open source projects, like ShapeChange and now the ETF test framework. And these are really useful tools in SDI development. So, I pressed the wrong button, so you don't see any slides right now, but hopefully we'll get that fixed. But I'll start anyway. So this is a presentation that I'm giving. Basically there are three organizations that have been working on this activity. It's a presentation that's kind of split into two halves. It's about validating services and data in an SDI. And first, what we're looking at is a large European project, ELF, the European Location Framework, that is working on a large SDI activity and which provides the background for what I will be talking about and for the experiences we relate to. And the second part will be about an open source software that has mostly been developed by Interactive Instruments, our company. And so I'll present a little bit about that as well. So the three organizations that contributed to this are Kartverket, the Norwegian mapping agency, with Roy Malham, who is actually the work package leader in the European Location Framework project that tries to build up all these services providing the data; and also Thais Prenchens from Geonovum in the Netherlands, who also contributed to the testing. All right, so I guess I'll still continue, right? So I'll provide a little bit of background about the ELF project. As I mentioned, it's a large European project. It runs for 44 months. It will finish in October and has more than 40 partners. 23 of those are national mapping and cadastral agencies, so they provide data. And it's their response to requirements on a European level, to Inspire and other requirements, to provide their data as consistent reference data for all over Europe. So what that means is that right now we already have more than 100 services that provide national data sets, basically to Inspire and also to the infrastructure that the European Location Framework project has built. It's currently from 13 service providers. A few joined earlier this year and are still working on their services. And all these services need to be validated and tested, right? If you were here in the previous session, you've heard about that from the OGC perspective. And we need to test and validate all these services so that they meet the requirements, the OGC requirements and the other requirements that we're looking at. And in addition, what the project also does, because the idea is that you don't have to connect to the 100 service endpoints, or eventually maybe 200 or more, there's also the approach of providing central access points that actually cascade the results from the national services. So there's also a central service infrastructure that provides cascading access to the national services. And that also needs to be tested, of course.
So it's kind of a challenging SDI that's been built as part of the Inspire developments and of course conformance and validation. Yeah. Thanks very much to the organizers. So the conformance is an important aspect because if you have a central node that actually cascades to web map services or web feature services, which is what happens there, then if you have problems in the services or in the data, then you have problems in the cascading integration server. So that is an important aspect of it all. And there is also when we looked at how do we test it, how do we validate that, that covers several levels. So we started at the bottom. Really it's the national data providers. They are responsible for testing their data sets, so the data. Then the national service provider, which is often the same, the national mapping agency, but it could also be some other organization, they have to test their services. And then on a central tier where you have the cascading services and also the security access control, licensing, et cetera, then it's the core team that actually manages the central infrastructure. And it's quite a stack of things that need to be done when you do testing. So for OGC conformance testing, we have the OGC compliance test that were presented in the previous talk, and that is used for that part. Then for service metadata, there is some additional, which is basically capabilities of WMS and WFS. There's also because we have inspire requirements, there is from JRCD, Inspire Geoportal metadata validator, and that is used for validating the service metadata of the services. And there are additional tests for Inspire Download Services and PireView Services, the GML encoding, and that's basically where the software that I will be talking a little bit more in detail later, the ETF tool, where we have specific Inspire and ELF specific tests will be, is used and has been developed. And we also, another aspect is to monitor the infrastructure and test capacity performance, and there the Spatino tools from Finland are used, and that is used to constantly monitor the infrastructure. And finally, for the data tests that the mapping agencies really do, what they use is the GIS systems that they use, so it can be as REE, ArcGIS, can be one spatial, Snowflake software, has some support, FME, there are also some specific ELF data quality tools, also some open source ones that have been developed, and these are used by the NMCA. So in order to get a working infrastructure, we have to make sure that all of that works. Just a little bit of a theory, if we look at it in practice, so the assessment of Roy, the work package leader from Cartwick, it is that basically there is still work to do. If we look at where the tools are, there's still work to do. From the validation point of view, so the functionality that they support, but also the error reporting, that's one of the big problems is when you see failed, you need some information, more information to understand what the failure is so that you can actually fix it. And that's one thing that we worked on in the ETF approach, and for those who have used the OGC site tests, you can do that as a developer. It's often hard, but it's doable then to try to understand what issue has been raised, but for users, let's say you have a mapping agency and you just want to run the tests and you get the result and you don't really understand at all why the failure occurs, that's a problem. 
So that's something that makes the cycle, the iteration process very hard and difficult. So that's something that needs to be worked on. It's easier for WMS because it's an easier service, and it's quite hard actually to get it right for web feature services. So we're in a status where it's possible to actually connect to the services, but it's really at a starting point. And more work will be needed in the future to actually increase the conformance of the different services. And as we've heard before also in Atina's presentation, the integration of the testing in the early development phases is an essential part. So you have to test early and then you can, it's much easier to fix the errors earlier than to fix them later in the process. So what I will be talking now about is the ETF activities that we have done. And maybe as a start, what we do is ETF, we call it a test framework because it doesn't run the tests themselves, but it uses other test engines like SoapUI for web service tests and base X, XML database for validating large XML data sets or documents up to several hundred gigabytes. That's why we use an XML database to actually do that. So some of the things that we did in the ELF project is, and that was mostly Geonovum's part, developing tests for Inspire a View Service test. So that's mainly WMS 1.3 and WFS2.0 tests and Atom and Inspire download service tests. What these tests are, they don't test the basic OGC compliance because that's what the OGC tests are for, but just the additional requirements that Inspire download service technical guides or view service technical guidance add. I'll have links to the resources later on the slides. What we developed are some additional tests that go beyond Inspire requirements. So when we have links between features, the Inspire guidance doesn't say anything, so we added additional rules so that we can, we have a reliable structure across the different national data sets and services to actually do so that you can follow links and have references in a consistent way between features and data sets. We also developed some plugins to SoapUI. We worked a lot, but there's still work to be done on the improved test reports so that it's easier to understand issues. We created a web application so that it's, and we hosted for the use within the project. We also added customization options for the report style sheet. The reports are in XML, also stored in an XML database and then rendered in HTML. We also have been doing some other activities outside of the ELF project. I already mentioned the support for large scale XML data sets using Base X. We also have direct support for Schematron tests and have made some extensions to the XML database to give it, provided with spatial capabilities because when we test the GML data, we also need support for geometric predicates. So geometric validation, geometric predicates and also spatial indexing is supported for large GML data sets. And we're also working right now on a major update of the core software to a new major version and I'll have something about that on the final slides. That looks a little bit white. I think that probably says ETF, so that didn't make it. So the PDF export looks kind of screwed also here. Never mind. So that's ETF and the user interacts with the web application and also the core. So it interacts with that. It connects to the services or the data sets that we test. 
Metadata falls also on the data set in that part because it's just basically some records that you're investigating and then it passes this information to the driver and the test projects are actually executed in the test engines. So as I said, currently it's soap UI basics. If we want to test additional data, potentially coverage data, then maybe we need some additional test engines as well. So we use soap UI, which has some advantages. I don't know. Maybe some of you know it. There's an open source version. There's also a closed source version, but we use the open source version of soap UI. And one of the big advantages is that you have a graphical user interface to actually develop your tests. That is much better than just writing it somehow in XML. And you have the mechanisms to rapidly test it, to test yourself, and then work on that. Which also gives us the opportunity because that's the test engine. So we can run the tests. You don't have to use the ETF, but we have the plug-in so that you can run all the ETF tests also locally in your GUI environment. But we also have identified some limitations. And part of that is reasons why it's sometimes hard to provide really useful test reports because there are some limitations on the capabilities when we see the workflow, the process that we need to go through when we test OJC web services, getting data, parsing the results, using that to create new test cases. That is sometimes hard to do. And basically, the standard test output is not useful. So we have done a lot of work to actually annotate also the test projects so that we can create more helpful reports. So that's just a screenshot of the web application. You just provide your URL. You select the test and you give it a name. So it's very similar to the OJC user interface that we've seen in the previous presentation. And then you get a result. And what we try to do is then to actually come up with messages that provide an idea of how to actually fix that. So here's a response for some that's WFS and Inspire requires that it implements minimum temporal filter and it detects that and reports that information. And that's just how the results then look like. You can find more information in the source code is all on GitHub. There's also an issue tracker. So for people having issues, identifying issues, that's what we use to actually track that. And there's also wiki pages that we need to put more work in to make the documentation is something that we still need to work on. We also have a Docker image. So most of the deployments really use Docker. That at least we use. So that's an easy way to get started, get going. Yeah. And we use also the badges in the GitHub repository. I mentioned we have test projects. So the GeoNovon ones, you have the link here. They are also on GitHub. Also with the issue tracker, we use the issue trackers as well there to clarify issues that we have in the project or also the GeoNovon tests are also used in the Netherlands where they are also using ETF for the national SDI. So the current users of that framework are really the ELF project. We are using it also in our internal continuous integration environment of our own products. GeoNovon, I mentioned they're using it and also the German mapping agencies, the lender are using it to validate, for example, the city GML data, of building data in their production workflows. And that's then also where we have several hundred gigabytes of city GML data that we're testing. 
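To give an idea of what the "minimum temporal filter" message mentioned above is about conceptually, here is a hedged JavaScript sketch of the underlying check: read the WFS capabilities and look for the corresponding Filter Encoding conformance constraint. This is only an illustration under those assumptions; the real tests are implemented as SoapUI test projects inside ETF, not in browser JavaScript, and the endpoint is a placeholder.

```js
// Hedged sketch: check whether a WFS 2.0 declares the Filter Encoding
// conformance constraint "ImplementsMinTemporalFilter" in its capabilities.
// This mirrors the kind of requirement an ETF test report complains about;
// it is NOT the actual ETF/SoapUI test code.
var wfsUrl = 'https://example.com/wfs';   // placeholder endpoint

fetch(wfsUrl + '?service=WFS&version=2.0.0&request=GetCapabilities')
  .then(function(response) { return response.text(); })
  .then(function(text) {
    var doc = new DOMParser().parseFromString(text, 'text/xml');
    var constraints = doc.getElementsByTagNameNS(
        'http://www.opengis.net/fes/2.0', 'Constraint');
    var ok = false;
    for (var i = 0; i < constraints.length; i++) {
      if (constraints[i].getAttribute('name') === 'ImplementsMinTemporalFilter') {
        var value = constraints[i].getElementsByTagNameNS(
            'http://www.opengis.net/ows/1.1', 'DefaultValue')[0];
        ok = !!value && value.textContent === 'TRUE';
      }
    }
    // A useful report would now explain HOW to fix this, not just "failed".
    console.log(ok ? 'PASSED' : 'FAILED: ImplementsMinTemporalFilter is not declared as TRUE');
  });
```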
One thing that we are working on right now and the idea is to present that, it will be presented in about a month at the Inspire conference in Barcelona that we're also working on using ETF in the Inspire test framework or the SDI Inspire test framework in the Inspire validator. And we're right now working on several extensions to what you've seen before. So there's a new API that provides not only the user interface access but also XML and REST API with XML and JSON encodings to talk to the validator, a much richer domain model with abstract test suites, et cetera, following the ISO and OGC test specifications, support for multilingual reports, et cetera. So several other things. And one of the things that we're working on that won't be ready this month or next, but that's on the plans also to add the team engine that we've heard before as another test engine so that we can also run the OGC site tests as part of ETF execution. And yeah, that's a mockup. That's actually from not really a mockup that was using the styling, but there's some advanced concepts already in there. It didn't run in the web application, but it was an executable test suite that took Inspire data, hydrographic data, and then we created a test report. So this is very similar to how it will look like, what you will see in a month when I think the idea is to make that publicly available as well. Okay. That's it. Thank you very much. Any questions from the room? Could you say something about how you test the view services? Do you do like an automatic image comparison with pixel difference or how do you handle that case? So we don't really do the, we don't copy the OGC site test. So we don't test the basic functionality. So we only test the additional, so the test, the Inspire View Service test only tests additional requirements that Inspire has above the OGC specifications. But we don't require any reference data in it, like the current OGC, that's one of the problems that we have also, I think, from an Inspire perspective with the current OGC WMS tests, because you have to have reference data. You don't have to, that with the WFS 2.0, it can work with any WFS, but with WMS you have to have reference data and then it does the image comparison. And yeah, I think there will be limits to what you can do automatically if you don't have a reference data set, because then you can't really, it's just, it's then, we basically don't test whether all the functions are exactly correct. We do some certain tests, but a test will never be that complete that you can be absolutely sure that every query is executed exactly right, because then you would need to have a reference data set so you can compare it. So typically we do that in our, let's say in our unit tests locally, but that's the idea of these tests is that they work with every service endpoint. So then there are limits to how far you can, how deep you can go with these tests. Somebody else? I have a question. So you're able to run CITI tests from EDF, maybe in the future. Is there also a work in the other direction? Does OTC and Lewis have benefits from the work that you are doing? Well, there is a, one of the discussions that I had with Lewis in March was if we, it would certainly be simpler if we harmonize or work together on how we express test results, right? Because one of the things when we do team engine integration is we can still work with the test reports that you can get from the team engine. And to be frank, that's not really good. 
So there will be limits of what we can include in the reports. And I know that in OTC there is also work ongoing to improve because that's a known limitation, right? So the idea is also there to work on new ways of actually encoding test reports. So Earl is one that's in W3C's draft specification for, I don't know, report length, whatever. It's a representation of these kind of test results. And it's one of the things that we also have in our design report actually for the Inspire test framework is that if OTC does that, we could also create, derive Earl reports so that we could use the same kind of thing that other people could build on top of that as well and you can access that also via the API. So there are certainly opportunities where that also works the other way around. Somebody, this is your chance, huh? Clemens doesn't have more presentations this week. Any shape change questions? So it may be good to mention is that recently Clemens and I worked on a project called LD Proxy which is also an open source project which enables a proxy layer on top of WFS to expose it to search engines and so on. So I think tomorrow we have a presentation about that and very welcome to join that. So looking for the next speaker, we'll start in five minutes again.
To achieve interoperability in a spatial data infrastructure (SDI), conformance to specifications is essential for services and data. Service and data providers need a capability to validate their components. For several OGC standards, the OGC CITE tests provide such a capability. This covers base standards, but in SDIs typically additional specifications are added, for example, service profiles or data specifications. In the European Location Framework (ELF) the test framework ETF is used to validate INSPIRE services and data provided by National Mapping Authorities against the INSPIRE Technical Guidelines as well as against ELF-specific requirements. ETF is a test framework for spatial data infrastructure components. It supports SoapUI (for testing web services) and BaseX (for testing XML documents, including very large ones) as test engines to develop and execute test suites. ETF has been implemented in several iterations over recent years as existing open source test environments could not be configured to provide uniform test reports that were readable by and useful for non-developers. Outside of the ELF project, ETF is currently mainly used in Germany and the Netherlands, partly extending the INSPIRE-specific tests based on national profiles. We present the approach for developing user-friendly test suites and discuss typical issues that have been encountered in the ELF testing.
10.5446/20375 (DOI)
Good morning again for the first session. And yeah, I introduce Andreas Hoczewa, Mark Jansen, keynote developers of OpenLayers, and I think you present news on OpenLayers 3. So have fun. Welcome, everyone. I hope everyone was enjoying last night's dinner party. Just as we did. And to get started, this is our logo, OpenLayers logo. And today we're talking about news and cool stuff in OpenLayers. To get started, the usual boring stuff, the first two slides and outline, we show you even more boring stuff about ourselves. And then we show you what's new in OpenLayers. Sorry, what's new in OpenLayers. What's new in OpenLayers. What's cool in OpenLayers and also an outlook to the near future. Okay, switch back to... It's still me on. OpenLayers core developer and steering committee member of the OpenLayers project, work as a consultant for Boundless. And Boundless invests a lot in open source development, is an active leader in the open source community, has developed and supported powerful software for enterprise GIS applications in that since 2002. Yeah, my name is Mark. You can hear I was yesterday at the concert. I'm a developer of OpenLayers, obviously. And I'm also a developer for the company Tillis, who is the CEO of. And I'm a developer not only for OpenLayers, but also GUXD. I wrote a German book about OpenLayers and I'm also speaking on national and international conferences just like now. So to rest with you might have heard about us. We have a booth upstairs. And we do all things open source geospatial and we build top notch solutions using the software tools that you have been hearing about the last three days. So now what is OpenLayers? We have to do that. So that's what it says on the homepage. It's a high performance feature pack library for all your mapping needs. And there's three things we wish to add to that sentence. It's open source, it's BSD licensed, it's a JavaScript library, and it's also an OS geo project. So OpenLayers has quite a long history. So 10 years ago we released version two. So basically I used to call OpenLayers some sort of a dinosaur in JavaScript libraries because like growing that old, you know, we really had to reinvent ourselves over and over again. So in August 2014 we released version 3.0 and the current version is 3.17, yes 3.17.1. But I think we are going to do a release today. So there will be some slides for mentioning 3.18. It is very actively developed. It has a big community of both users and developers. It is very well documented, that's what we think, at least compared to version two. And also it has a very huge examples collection. So it's basically usable everywhere and it's also being used everywhere. So this is a map. It shows last year's phosphor T location. If we zoom in here a bit, we see a nice residential area in our near soul. I always like these structures that look nice on aerial imagery. But this was last year in Seoul, in South Korea. Let's move to where we are this year. And by doing so I show you just some features of OpenLayers. We can do animated transitions of the view. Like here we are flying to one. And then we can zoom in further to our exact conference location. We can make the map full screen. And you may still be worried about the labels not being upright. So let's rotate the map to a northup view like you would expect. As you can see, when I pan the map here, there is always something to see because we have preloaded tiles from lower resolutions. Also when tiles are loaded, the focus is around the cursor. 
So tiles where I am with the mouse are loaded first. So this makes for a really pleasant and performant user experience. Just some basics on this slide here. Okay. So you just saw a map. Not really that cool. So what is cool in OpenLayers? This that we should mention is it supports many, many different data source and layer types. You can pull in raster data, vector data from many common formats, also from some exotic formats. We have interactions and controls that you can add to the map for your users to interact with it. The most basic one maybe the plus and minus for zooming. But there are also interactions for drawing and editing vectors and many more. It works out of the box on mobile devices 100%. And it also supports retina high DPI devices out of the box when iPhone 6 was introduced with higher resolution than any device before. Some other applications broke, but not the ones that used OpenLayers. Yes. So what do we do? So we already saw the rotation feature and I think it's the simulation already starting. So you can do cool things with this. So this is the OpenLayers developers. Every morning he goes by bike to the office location. This is Eric LeMond bicycling basically. And the center of the map is being updated correctly and all the time. And also now when he goes into the roundabout. What's he doing? He's taking a shortcut. He's going through the roundabout. It's Eric, probably Eric. And yeah, we can do cool stuff by combining all these small interactions into really cool applications. So what else do we? Oh yeah, there's some tiles missing. But that's not a problem. What I want to show is still on the slide. So we're using a tile server here that's not very responsive today. But one thing you can see OpenLayers supports projections out of the box. In these two maps on the left, we have mapping geographic coordinates. And on the right we have the more familiar webmercator projection. And if you look at these circles here, you can see how the projection distorts as you go from the equator to the poles. These are just two projection examples. When you have a projection definition with the transforms to geographic coordinates, you can use any projection that you want. Also your local projection from your land surveying agency or whatever. Yeah, also under the hood, OpenLayers applies a lot of, yeah, we call them vector rendering tricks. So that rendering of vectors is very, very fast in our opinion. So what you see here in the background is the blue lines. This is a fractal. It's a Koch flake. And it's made out of 700,000 vertices. But as you are currently at a certain zoom level, and also you're not showing everything of it, for rendering we call it oversimplification. Like we strip a lot of vertices out before even going to the renderer. And also we have some sort of an internal grid. So if two or more vertices are happening to go on the same grid, we will discard the second one. So we have to render only as few points as possible, also depending on the resolution. Quite some nice vector rendering stuff. You know earthquakes happen like this year in Italy. So it may be important to analyze data about earthquakes. This map here shows OpenLayers vector styling features using clusters and also advanced vector styling features because you can also change the original geometries and render something completely different. So as I hover over one of these clusters, they expand to the original earthquake locations and the stars you see are scaled by the magnitude of the earthquake. 
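As a hedged sketch of the clustering and magnitude-scaled star styling just described: the snippet below uses an OpenLayers 3 cluster source and a style function, but the data URL, the "magnitude" attribute name and the scaling factors are illustrative assumptions, not the presenters' demo code.

```js
// Earthquake-style cluster layer: clusters get a labelled circle, single
// features get a star (ol.style.RegularShape) scaled by a hypothetical
// "magnitude" attribute.
var clusterSource = new ol.source.Cluster({
  distance: 40,
  source: new ol.source.Vector({
    url: 'earthquakes.kml',                         // placeholder data set
    format: new ol.format.KML({extractStyles: false})
  })
});

var earthquakeLayer = new ol.layer.Vector({
  source: clusterSource,
  style: function(feature, resolution) {
    var members = feature.get('features');
    if (members.length > 1) {
      return [new ol.style.Style({
        image: new ol.style.Circle({
          radius: 12,
          fill: new ol.style.Fill({color: 'rgba(255, 153, 0, 0.8)'})
        }),
        text: new ol.style.Text({text: String(members.length)})
      })];
    }
    // Single earthquake: star size derived from the magnitude attribute.
    var magnitude = members[0].get('magnitude') || 1;
    return [new ol.style.Style({
      image: new ol.style.RegularShape({
        points: 5,
        radius: 4 * magnitude,
        radius2: 2 * magnitude,
        fill: new ol.style.Fill({color: 'rgba(255, 0, 0, 0.6)'}),
        stroke: new ol.style.Stroke({color: '#990000', width: 1})
      })
    })];
  }
});
```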
So with these advanced styling features, you can achieve good-looking maps that make it easy to analyze the data you're looking at. Yeah, but not only can we do cool vector stuff, we can also do cool raster stuff. So what you see here is an example that uses Bing tiles in the background, aerial images, and it's calculating here the so-called vector green... no, it's the vegetation greenness index, and it's doing it in your browser. So it just analyzes every pixel and you can change the threshold down here. So if I do it like this, I can manipulate all the rasters in my map to change color in this case. And this is all done by the raster sources we have, so we can put in any raster source like a WMS or, in this case, pre-rendered tiles, analyze them further, and it's all running in your browser (there is a small sketch of this kind of per-pixel operation after this paragraph). This is stuff that used to be done in dedicated programs, but it can also be done on the web using OpenLayers. So what's new? Andreas, tell me. Good question. What's new? We've been working a lot in the last year. Here you can see a screenshot from GitHub. You see the top eight contributors during the last year. And some statistics about that: since version 3.9, which was last year at the FOSS4G conference, we had contributions from developers from four different companies, also from two individual contributors. They came from seven different countries. And we also have many contributions, as you can see here, from greenkeeper.io. But why is a bot making commits to OpenLayers, and even more than me in the last year? This is a bot that works on Git repositories, basically, and it keeps our dependencies up to date. So whenever some software we depend upon releases a new version, like for example the great rbush library by Vladimir Agafonkin, we get a pull request with a commit by that bot. And then we can check it and most probably merge it. Okay, so we're comparing now between last year's FOSS4G conference and this year's FOSS4G conference. And if we look at GitHub statistics again, we saw that we had more than 2,000 commits in that period. Almost 1,200 files were changed. And contributions were made by 21 contributors. But I know we promised some cool stuff and new stuff, and not details about what we committed and how often we renamed files, just to, you know, show there are a lot of changes. So what's actually new? What's really new? Okay. Let's finally do it. We won't see much on this example, unfortunately. But one thing we added since last year is raster reprojection. But the tiles we wanted to reproject are not loading, or only some of them are loading, so you don't see anything. But you can imagine there are many different projections. The world looks different in any projection. And this slide was meant to show you how that works using some world projections like this one here or also local German projections like this one. But you have to imagine the image. You can do that. You want to show the online example? Maybe, if there's time left at the end of the presentation. Let's move on, maybe. Another thing we added is vector tiles. And since it first landed in 3.10 or 3.11, I think, we have made many performance improvements to it. So as you can see, this is really smooth as you pan the map around. Zooming is really smooth. There's no more lag. And always keep in mind that this is using Canvas 2D and not WebGL. So this means it also works on devices that do not have WebGL available.
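Here is the per-pixel sketch referenced above, using the ol.source.Raster source available in OpenLayers 3. The greenness test is a crude stand-in for the vegetation index of the demo, and the Bing key, band arithmetic and threshold are illustrative assumptions rather than the presenters' code.

```js
// Per-pixel raster analysis in the browser with ol.source.Raster.
var imagery = new ol.source.BingMaps({
  key: 'YOUR_BING_KEY',          // placeholder key
  imagerySet: 'Aerial'
});

var raster = new ol.source.Raster({
  sources: [imagery],
  operation: function(pixels, data) {
    var pixel = pixels[0];
    // Very rough "greenness": share of the green channel in the RGB sum.
    var greenness = pixel[1] / (pixel[0] + pixel[1] + pixel[2] + 1);
    if (greenness < data.threshold) {
      pixel[3] = 0;              // hide pixels below the threshold
    }
    return pixel;
  }
});

// The threshold can be bound to a slider; it is passed in before each run.
raster.on('beforeoperations', function(event) {
  event.data.threshold = 0.4;
});

var rasterLayer = new ol.layer.Image({source: raster});
```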
Yeah, you can also do the following: you can now render geometries anywhere. So what we see here is vector tiles in the background. And if I click on some of the features over here, like this building, for example, what happens now is that I query for the geometry, and then I just render it to some random canvas over there. So this is not a map. This is a canvas that could be anywhere. A typical use case for this would be if you want to create a dynamic legend next to your map that is rendered with the actual styles you're using. Yes, and let me just add one thing to the slide that you showed that didn't work correctly: we can now reproject raster data in the web. Let me just repeat that. It's just awesome. So also, we strive towards, well, more feature parity with what we had in OpenLayers 2. So in OpenLayers 2, we had a lot of very sophisticated functions to, for example, rotate geometries. And we now support this in OpenLayers 3 as well. So there's a small code example showing you how to do that. It's nothing so exciting, but it's really easy now to transform an existing OpenLayers 2 application to OpenLayers 3 because we're nearly at feature parity. So we can now include CartoDB tile sources. This is how you do it. And yeah, you see how easy it is to style it. You apply a CartoCSS styling here and also the SQL that's going to be applied. And this is basically it. And then you can use CartoDB sources in your maps. There's also another thing that we added. So now we support another Esri service, the ArcGIS REST source. There's going to be another talk by Bart soon about interoperability, so we support that as well out of the box. Another thing that was frequently requested from users switching over from OpenLayers 2 was OGC filters for WFS queries. We have that now as well. So here you see a filter that's applied to an OpenStreetMap data set, the water areas. And we filter here for water areas in Mississippi and only for river banks (there is a small sketch of building such a filtered request after this paragraph). So these were the new features, not that many, but we made a lot of other improvements that you don't necessarily see immediately. But you see them here. On the left, you see how we updated params from a time series WMS, or also dimensions in WMTS. This is something that's used often in meteorology for weather maps. And on the right, you see how it looks now. And this is how it should look. So there are no tiles leaving the screen and entering it again. It's just smooth transitions between tiles. And besides these improvements that you can see, there are also many that you cannot see. We've been working on a major restructuring of the library, resulting in faster builds. So the source files that go into the library last year were 3.8 megabytes and they were compressed and minified to 143K. And this year, or a month ago, we had only 2.9 megabytes that go into the build and they get minified to 140K. And let's not forget about the features and improvements that were added since then. And the library got smaller. So that's good. And how did that happen? Yeah, to understand the present and the future, one has to look at the past. So this is a discussion that Tom MacWright and Eric, whom you saw bicycling earlier, had in 2012. So Tom basically says, I'm worried that you are using Closure Library because it inspires big projects to be big and glued together. And Eric points out, yeah, you're probably right, but there's also a lot of advantages we can take from using Google Closure Library. And that's true.
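The WFS filter sketch referenced above: a hedged example of writing such a GetFeature request with the filter helpers that arrived around OpenLayers 3.16/3.17. The endpoint, feature type, namespace and attribute names are illustrative assumptions mirroring the spoken description, not a real service.

```js
// Build a WFS 2.0 GetFeature request with OGC filters in OpenLayers 3.
var request = new ol.format.WFS().writeGetFeature({
  srsName: 'EPSG:3857',
  featureNS: 'http://openstreetmap.org',     // assumed namespace
  featurePrefix: 'osm',
  featureTypes: ['water_areas'],
  outputFormat: 'application/json',
  filter: ol.format.filter.and(
    ol.format.filter.like('name', 'Mississippi*'),
    ol.format.filter.equalTo('waterway', 'riverbank')
  )
});

fetch('https://example.com/geoserver/wfs', {  // placeholder endpoint
  method: 'POST',
  body: new XMLSerializer().serializeToString(request)
}).then(function(response) {
  return response.json();
}).then(function(json) {
  var features = new ol.format.GeoJSON().readFeatures(json);
  vectorSource.addFeatures(features);         // some existing ol.source.Vector
});
```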
So internally as developers, when we started working on OpenLayers 3, we were thinking: do we want to reinvent the wheel and handle all the browser differences ourselves? Or do we want to include some sort of a base library that helps us do these things right, and that also includes a lot of cool things to really get the smallest code possible or the fastest code possible? You want to add something? Yeah, but things have gotten better on the web. We don't have to support IE8 anymore. We don't have to support IE9 anymore, but it still works in IE9. So there's no need anymore for a library that takes away these browser inconsistencies from you. And so last year, we sat together at the FOSS4G conference, and we were aware that our users want and do use mainstream build tools and bundlers. And Closure code does not integrate well with that. So we said, let's at least remove the dependency on Closure Library, because then people who use Closure Compiler can still do so, but those who want to use tools like Browserify can also do so without problems. And that's what we've been working on in the last couple of months. And this effort is now almost complete. We have removed 98% of the Closure Library; it will be done in the next couple of weeks. Instead of the event system we had from Closure, we added our own lightweight event system. We made the matrix transforms that we use to transform coordinates to screen coordinates way more efficient, and we also have a more efficient class inheritance. And we were able to get rid of all this legacy code for outdated browsers. And finally, the code also looks better. It's easier to contribute to OpenLayers because it is now JavaScript code that looks like JavaScript and is JavaScript as it was meant to be. So, and now, what's next? This is how your OpenLayers application could look when we have finished the transition away from the Closure Library (there is a small sketch of that module style after this paragraph). Because one plan is, instead of what we have now: like in OpenLayers 2, you can currently create custom builds of OpenLayers, so you take the full build and strip out things that you don't need. In the modern JavaScript world where you have modules, it's the other way around: you only include the parts that you need by requiring them explicitly. So instead of including OpenLayers and then using what you need and still having there what you don't, you create your application like this: you only include the modules that you require in your application. Then there are more things to come, like more rendering improvements. There will be a talk about vector tiles in the afternoon where we'll be showing some of that. But this is it for now for this introductory talk. Thank you very much and I hope there will be questions. Thank you. Perfect time. Questions? You mentioned the vector tile rendering support on Canvas 2D. How about rendering them in WebGL? Is that on the way, in the works? So OpenLayers does have a WebGL renderer. It currently supports only point features. There has been work going on during the summer from an individual contributor. He's trying to bring support for WebGL rendering for lines and polygons as well. But I cannot make any promise at this point whether it's going to happen. But one thing I can say for sure is that with all the rendering tricks that we apply to rendering to Canvas 2D, we are very fast, and the fastest Canvas 2D library that renders vector tiles. Okay. More questions?
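The module-style sketch referenced above. At the time of the talk this was a plan, not a released API, so the module paths and packaging below are assumptions about the direction described, nothing more.

```js
// Hypothetical module-based usage: only the parts you need are imported
// explicitly instead of stripping down a full build. Paths are assumptions.
import Map from 'ol/map';
import View from 'ol/view';
import TileLayer from 'ol/layer/tile';
import OSM from 'ol/source/osm';

const map = new Map({
  target: 'map',
  layers: [new TileLayer({source: new OSM()})],
  view: new View({center: [0, 0], zoom: 2})
});
```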
No, you don't... of course you do. Hi, I'm Ivan. You know me from the Leaflet team, and last year OpenLayers was Leaflet. So let me say congratulations on the features. I am very jealous of the raster reprojection, really. Is there anything from Leaflet that you wish you had? Because let me say this straight: from Leaflet, we want to have a lot of things that OpenLayers has. Is there something the other way around? It's a good question. And for me, it's always a bit like comparing apples and oranges. But one really cool thing about Leaflet is that it's so simple to get started with it. In OpenLayers, you have a bit of a learning curve to get started, but then you have a verbose toolkit that allows you to do a lot. And with Leaflet, you can get started immediately. And that's one thing we are still working on. So we are still trying to improve the API, and there's going to be a lot of discussion also to be had. But we are working on it. I think we'll never be as simple to get started with as Leaflet. But as I said, the target audience is also maybe a different one. Yeah, originally I had a different question, but picking up on that, I can fully underline what you just said about the best of the two worlds, because actually I do code in both of them, and it just depends on the project. So, like you said, it's apples and oranges. But the actual question I was going to pose to the OpenLayers team was: one of the features I was most excited about when I started out with OpenLayers was the style functions. So I'm currently having a style cache, which is not that big because fortunately I don't have to support that many layers in this particular project. But once I want to go down to, say, zoom level 18 or 20, and it would have lots of styles, I'd probably have to implement something like an in-browser index file or something. I would have to look into that. So my question is, is there any idea about actually implementing that in OpenLayers itself so it can be used with it, or would that rather be something for a third-party library? You mean something like sprite images for symbols? Or, I think I didn't understand the question. No, actually, pre-rendering some styles to load them out of the cache so that the function itself doesn't have to return them each time. And we're talking icon or image styles here for points. When you pre-load your images on the application level, you can get that out of the box. So you can create your icon styles either with a source that points to an image, or you can construct the style with an image that you have loaded already. So you can do the pre-loading on the application level. This is also something that's nice about OpenLayers: we tried to make simple and meaningful defaults, but there are several levels of the API where you can hook in and customize things to your needs. And using pre-loaded image elements instead of pointing the icon style to a source is one of the things you can do. So you can do that already if you do this at the application level. Okay. Thank you. So you will find these guys around here, at least at the terrestris and Boundless booths. And we change for the next talk. Thank you. Thank you very much.
OpenLayers 3 aims to be a full-featured, flexible, and high-performance mapping library leveraging the latest web technologies. Since the initial release of 3.0 at the end of 2013, the library has matured significantly, and great new features and improvements are rolling out with each monthly release. Are you are still using OpenLayers 2 and feeling that the time has come to upgrade? Or curious to see what a comprehensive mapping library can do? Join us for this feature frenzy of OpenLayers 3, where we will present our recent and ongoing work on making the library more user-friendly, robust and powerful. Whether you're a developer or decision maker, this talk will get you up to date with the current status and upcoming features and improvements of OpenLayers 3.
10.5446/20373 (DOI)
I'm going to present two small libraries, way out of the league of Mapbox GL and Leaflet GL that we've just seen. So this is something completely different. I'm talking on behalf of my company, EOX, and one of my colleagues who is not here, which is this guy with the coffee mug. This is me, Favre Schindler, and I've got something else on my shoulder to keep me awake. We come from the Earth observation side, so we deal with Earth observation acquisitions, satellite images, and also with model data. The current status is: either you have it in a desktop application, think of the Sentinel Toolbox and so on, or also QGIS and others, or you have it in a web app, and there you have it either pre-rendered or you render it on demand. So for example, you send a WMS request and get an image back, or you send a WMS request with a specific style and specific parameters and get a rendered image back. But this is not what we wanted. We wanted to have the raster exploration fully interactive: I get the data once and I render it depending on the parameters on the client side and I get an image. We also wanted to make it work without any plugins like Flash (who uses Flash anymore?), so we just wanted to use open web standards. And we also wanted to make it so that it's easily integrated into web mapping frameworks like Leaflet or OpenLayers or Cesium. Okay, fortunately for us, HTML and other web standards have made a huge leap in the past years, and these are just a couple of tools that we are going to use in the project. One is the Canvas API, one is WebGL, which we've heard a lot about, and also typed arrays. They are not really separate because they're all kind of intermingled, but those are the three buzzwords I'm going to use. Okay, so the problem is split into two parts. One is transferring the data and one is rendering the data. For transfer, there is a challenge because browsers do not support any scientific or raster data formats. They usually support RGBA, like PNG, JPEG, and some also support WebP. One workaround would be, if you have, like, Float64 (well, Float64 wouldn't work, but, I don't know, Float32 and other data types), to encode the values in the three or four bands of the raster data and then transmit it to the client and decode it, but this is quirky. It's really quirky because it's not self-explanatory anymore. You also have to have some metadata to explain how the data is laid out and how you have to decode it. And also, as I said, it requires specific decoding and it only works with that specific data type. So we thought, what could we do? We use another image type and write our own decoder for it. This image type is GeoTIFF. It has very good software support. It's widely accepted. You may have heard of it. It's been around for something like four decades and it's flexible. It's very flexible. So flexible that TIFF is sometimes jokingly translated as Thousands of Incompatible File Formats. TIFF is also extensible. This is how the Geo comes into the GeoTIFF: with GeoTIFF, it's able to encode geographic metadata, and you can do something on the client with it. And this is also interesting because we deal a lot with open standards like WCS and other OGC standards. There is a GeoTIFF application profile where you can specify the exact parameters of how the GeoTIFF will be laid out, which helps afterwards. Okay. So we created geotiff.js, which is a pure JavaScript parser for GeoTIFFs. And as we said, it's just using JavaScript with typed arrays and DataViews to decode the data. Okay.
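To make the "typed arrays and DataViews" point a bit more tangible, here is a minimal, hedged sketch of reading the classic TIFF file header from an ArrayBuffer. It only illustrates the general approach; it is not geotiff.js source code.

```js
// Read the TIFF header: byte-order mark, magic number and the offset of the
// first image file directory (IFD).
function readTiffHeader(buffer) {
  var view = new DataView(buffer);
  var byteOrder = view.getUint16(0);            // 0x4949 'II' = little endian, 0x4D4D 'MM' = big endian
  var littleEndian = byteOrder === 0x4949;
  var magic = view.getUint16(2, littleEndian);  // must be 42 for classic TIFF
  var firstIFDOffset = view.getUint32(4, littleEndian);
  return {
    littleEndian: littleEndian,
    magic: magic,
    firstIFDOffset: firstIFDOffset
  };
}
```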
This is kind of boring, but it is what we actually achieved with geotiff.js: we have a full implementation of the TIFF spec, tiled or stripped, band- or pixel-interleaved. We support almost everything. We lag behind on compression; we currently just support uncompressed or PackBits-compressed data. Okay. So this is a JavaScript presentation, so I can already show you how it looks. This is an RGBA GeoTIFF and it's parsed and presented on the client. Nothing special here, but this is actually a GeoTIFF in the background. Okay. So this is how it looks in code. I should have warned you, this is a little bit on the technical side. I'm using the fetch API to get the TIFF, I create an array buffer out of it, and here I parse it. I get the image, and this block below is just for copying it to the canvas (a cleaned-up sketch of this flow follows at the end of this part). You don't like it? No. You don't iterate the pixels? No. Okay. Okay. Show me afterwards how it's done correctly. We'll have to talk. Okay. Okay. So this is RGB or RGBA data, but what about other data? That's where Plotty comes in. Plotty is also a small JavaScript library. It's also using the Canvas API, and it basically transforms this data through color scales to get some nicer output. So this is how it looks. This is just some artificial data and yes, it's very trippy. So this is probably not suitable for a GIS conference. So this is just how we laid it out, how we created it, and there, this is the black box you talked about. So this is the black box: everything is hidden behind this, and this is ugly indeed. Okay. Okay. So this is a radar image and this is an interactive demo. So I can choose the color scale and I can also use fancy sliders to visualize only the data I want to see. Wow. Really cool. So yes. So, putting it together, I created a small OpenLayers widget and put a digital elevation model on it. The digital elevation model is transmitted using WCS. It's parsed via geotiff.js and then it's put on the map widget. So this is fully interactive. All right. This is a YouTube video of a project we do, and we are using a Cesium map widget here. So in this case we have model data, and this model data is also parsed via geotiff.js and then rendered. So you can see, maybe I should pause this. Okay. So what we did here, we just selected the layer, and behind it the WCS server supplied the GeoTIFF and then it's parsed. And so we again have some sort of widget here to control the rendering. So in this case, we used the bands, the stacked bands, as a Z value, so like the height. And I think it's very beautiful to see, to have a volume rendering directly on the Cesium globe. Yeah. So in this case, you're only seeing the values you're actually interested in, which is very interesting for scientists. The globe itself is just the usual Cesium functionality, so this is nothing from us. Okay. Okay. So. And then we also played around and made some sort of animations out of it. So this is how it should look. This is actually an animated GIF, so we tricked a little bit here, because the rendering performance is not yet satisfactory. This is actually a model output of a volcano, an emission of a volcano in Iceland, SO2 emissions here. Okay. There's still a lot of work to be done. As I said, we'd like to support all of TIFF, and this also includes LZW and Deflate compression. For Plotty, we'd like to improve the performance, especially when we're dealing with larger datasets or many datasets, like the layered datasets we saw earlier. And also we'd like to provide some simple and easy integrations with OpenLayers, Leaflet and Cesium, so that you just have to add a Plotty layer or a GeoTIFF source and that's it.
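Here is the cleaned-up sketch referenced above, combining geotiff.js and Plotty roughly as the demos describe. It follows the API names the two projects documented around the time of the talk (GeoTIFF.parse, getImage, readRasters, plotty.plot); treat the exact calls, the URL and the value domain as assumptions and check the current READMEs.

```js
// Fetch a GeoTIFF, decode it with geotiff.js and render band 0 with Plotty.
fetch('data/dem.tif')                               // placeholder GeoTIFF URL
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(buffer) {
    var tiff = GeoTIFF.parse(buffer);
    var image = tiff.getImage();
    var rasters = image.readRasters();              // typed arrays, one per band

    var plot = new plotty.plot({
      canvas: document.getElementById('output'),
      data: rasters[0],
      width: image.getWidth(),
      height: image.getHeight(),
      domain: [0, 3000],                            // value range of the DEM, assumed
      colorScale: 'viridis'
    });
    plot.render();
  });
```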
Okay. Thank you for your attention. So, thank you. Time for your questions. Thank you very much. So, one question: GeoTIFFs can potentially be pretty large, and I'm not exactly sure because I didn't use WCS that much. Does WCS support decreasing the resolution and then just giving you a lower file size? Yes. WCS has become quite flexible in this regard. There is even an application profile for GeoTIFF, so you can even define the internal tiling of the GeoTIFF if it's a larger one, so that reading the small tiles of this GeoTIFF is faster afterwards. So yes, you can, and you can also request reduced resolution (see the request sketch at the end of this Q&A). So yes, it's quite flexible. And on the server side, what kind of WCS solution would you use there? Because you probably don't want to have a fully-fledged thing like GeoServer for that case, but a slim library which just has WCS. What kind of solution do you use there? For example, one that would work is just MapServer, because we did the GeoTIFF implementation in MapServer, the GeoTIFF WCS implementation, so we know it's working. Yes, so MapServer would be one solution; another one is another open source project, EOxServer, which we didn't get the chance to have a talk about. All the demos are actually with EOxServer, but this is more of a full-fledged solution like GeoServer. So, any other questions? More. Yeah. Back there. Yeah. In web mapping applications, we are usually using big amounts of data. For example, if we want to present the topography for a whole country, we need to transfer from the server, from the web coverage service to the browser, for example, plenty of megabytes or even gigabytes. What is the sensible amount of data to use with geotiff.js? What is the amount of data which makes sense to present to the client, so the client doesn't have to wait 20 minutes? Yes, this is actually a tricky question. It would be easier to answer if the compression was already implemented, because then you wouldn't lose as much; the overhead wouldn't be much higher than, for example, JPEG or PNG. But in this case, it's hard to say. I think it also depends on the use case. For example, if you're just inspecting one single image and the initial waiting time is okay because you can do fast rendering afterwards, then it would be a good solution. Otherwise, you'd have to use server-side rendering. It's hard to say. Have you looked at using Emscripten or something like that to compile the C compression libraries for loading into the browser for decompression? Yes, we did. I think not too long ago I heard that there was actually an Emscripten build, I think. You did it? I did not. You did not? Okay. Yes, we looked into it, but geotiff.js is really small. It's just, I don't know, a couple of hundred lines of code. So it's really small, really fast, really lean. And it supports almost all of the TIFF spec. So that's kind of cool. So it has a small footprint, and this was one of the aims. Actually, when we researched it, there is an Emscripten-compiled version of the original libtiff, but we did not get it to work with other data than RGBA. Okay, so it didn't take that long. So, okay. Okay. Any other questions? Okay. Then thanks again.
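The request sketch referenced in the answer above: asking a WCS 2.0 endpoint for a spatial subset at reduced resolution before handing the result to geotiff.js. The endpoint, coverage id and axis labels are placeholders, and SCALEFACTOR comes from the WCS scaling extension, so whether it is available depends on the server.

```js
// Request a subset of a coverage at roughly a tenth of its native
// resolution via WCS 2.0 KVP, then decode it as in the earlier sketch.
// Axis labels (Long/Lat here) depend on the coverage's CRS.
var url = 'https://example.com/wcs' +
  '?SERVICE=WCS&VERSION=2.0.1&REQUEST=GetCoverage' +
  '&COVERAGEID=dem_example' +
  '&FORMAT=image/tiff' +
  '&SUBSET=Long(24.5,25.5)&SUBSET=Lat(60.0,60.5)' +
  '&SCALEFACTOR=0.1';

fetch(url)
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(buffer) {
    var tiff = GeoTIFF.parse(buffer);   // continue as in the earlier sketch
  });
```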
Exploitation of Scientific Raster Data stored in large online archives used to be cumbersome: either the data has to be transformed in an RGB version on the server using parameters supplied by the client, or the original data is downloaded and then inspected using a desktop GIS system. Browsers without specific extensions simply were not capable of dealing with the types of data found in scientific context. Today with HTML5 and WebGL browsers finally have the necessary prerequisites to create tools to dynamically visualize and explore scientific data sets. geotiff.js is a small JavaScript library to parse GeoTIFF files containing any kind of 2D raster data. The library handles various different configurations and common data types far beyond RGB data. On the other hand, plotty.js provides functionality to dynamically style 2D arrays for visualization using either predefined color scales or custom ones. In the presentation, I’m going to show how both libraries complement each other to allow a very dynamic form of data exploitation. Additionally, it will be shown how the techniques can be applied to more traditional Web Mapping concepts as dynamically styled data is displayed on a globe widget in various forms including 3D data cubes and time series of data.
10.5446/20371 (DOI)
Hello everybody and welcome to this session. My name is Jakob Ventin. I come from the National Land Survey of Finland. And first off we have Oliver Tonhofer, who will give us a talk with the topic Magnacarto: create map stylings for MapServer and Mapnik. The stage is yours. Okay, thanks, and yeah, good afternoon. Right, I will talk about creating map stylings for MapServer and Mapnik with the help of a new open source tool we developed called Magnacarto. A few words about myself: I'm Oliver Tonhofer. I work for Omniscale, which I also co-founded. We are a company from the northern part of Germany. We do a lot of open source development, client side, server side. We also developed other open source tools like MapProxy and Imposm. But we also provide OpenStreetMap services. Maps Omniscale is a product of ours where we provide WMS services based on Mapnik. But we also provide custom map services, mostly for agencies, governmental agencies, based on OpenStreetMap but also based on official data. And these services are mostly based on MapServer. And so we are in the business of creating map styles. Okay, Magnacarto. What is Magnacarto? It is a style processor. So it is similar to TileMill or Carto, the tool, not the company. Carto is the tool that is behind TileMill. And you might know this tool. Magnacarto is something similar. It reads CartoCSS. CartoCSS is a markup to define map styles. It is similar to CSS, the markup to style web pages. And I will go into a few details of CartoCSS in a minute. Magnacarto then converts CartoCSS to Mapnik XML files, and you can use this styling to produce maps with Mapnik. And this is something that TileMill and Carto do as well. What is unique with Magnacarto is that it also writes MapServer map files. So you can have the same styling with MapServer and Mapnik. Magnacarto is a command line tool. You define your style and just call the Magnacarto tool with a builder, in this case MapServer, and you get a map file. And similarly, for Mapnik 3 and Mapnik 2, you just choose another builder and get the output for Mapnik. Magnacarto also comes with a web client. This web client allows you to have multiple interactive map views. It supports automatic reloading, so as soon as you make a change to your styling, all the map views update. You can edit layers in this web client, create new layers, modify them, reorder them, etc. Magnacarto is not point and click. So there is no color chooser like you are used to in QGIS. It is BYOE: bring your own editor. So you can use the tool you are most familiar with. If you are a vi user, you can use vi. If you are an Emacs user, you can use Emacs. You can use the editor you are most comfortable with. CartoCSS, the markup, is really powerful, but it has a steep learning curve. So, as I said, it's not point and click. It's not that you sit down at your computer and just click the map together. You need to somehow learn it. So it is not the right tool if you just want to build a simple map displaying a few points of interest or just admin boundaries, etc. But it is the right tool to build complex basemaps. The OpenStreetMap project, for example, has a complex basemap, and they do actually use CartoCSS for creating their basemap styling. To give you an impression of how much easier it is to style maps with CartoCSS: the OpenStreetMap project's CartoCSS files are about 7,000 lines, and this generates 35,000 lines of Mapnik XML.
You can imagine that it is easier to maintain and work with 7000 lines of Carto instead of 35000 lines of Magna XML. A little disclaimer, the OpenStreetMap Carto project, they use the Carto tool to build the actual Magna XML file, but they are contributors that use Magna Carto to test style changes locally. Magna Carto is really fast, so just starting the tool, the command line tool, is within a few milliseconds and even building complex styles with Magna Carto, like the OpenStreetMap styling, just takes about a second. Carto itself takes 789 seconds for the same styling. So this is really essential for doing rapid iteration because you normally, you are not writing down the style and you are done, you have to tweak it, you have to adjust the colors, the tiny bit, the road width and you, yeah. So it's important that this is really fast. So how to get started? You can go to Magna Carto. Right now it's just forward to the GitHub repository. It is open source, it is written in Go, we provide binaries for Linux, MacOS and Windows, you get the command line tool Magna Carto and also the web client. The web client also requires either the Mapsurf binary or if you want to view MAPNIC maps in the web client, you also need Magna Carto-MAPNIC binary, which we at the moment do not provide as a binary, so you have to compile that on your own. So how to create map stylings? Step one is to define your layers. You can do this with an MML file and Magna Carto and you can use it as a support JSON and JAML format for that. This is a basic layer. It's got a name, a geometry type and this case line string, a data source and this case is a PostGIS database and you really define an query. So we can just say, okay, we want the data from the road table, in this case we also filter that. So you have a nice form, we can define the queries, there's even a little editor with syntax highlighting. Step two is to style the layers. You write one or more MSS files and a basic MSS file looks like this. You got a reference layer name and then define all these style options for this layer. There are quite a few available style options, but these are for all possible things you can do with Magna Carto-MAPNIC, for example. For polygons, for example, the list is a lot shorter and the common polygon styles are just basically only the fill is most important, opacity, maybe gamma. So you have to remember these things. As I said before, there is a learning curve, but most of these options are pretty easy to remember. So how to style a polygon. This is the easiest case. We have an area layer and we just say, okay, we want a polygon fill with this color and then you can generate with Magna Carto this style block. This is for MAPNIC and XML format and this is for MAPSERVER, the class block. Magna Carto also produces the rest, so the layer definition, it outputs it also as XML or as a map file, but I didn't show that here. In the end, you get a green polygon. How to style lines? This is similar. Here you define the line width and the line color and here we have now a simple road network. But of course, MAPS should look a bit different. You don't want to have all roads to look the same. So you can have filters. With filters, you're using square brackets and the type in this case is the column name. We have in the database and we're filtering on motorway and primary and say, okay, motorway is red, primary is blue. We get this block for MAPSERVER with expressions where this filter is inserted and we have our final map. 
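To make the two steps above concrete, here is a minimal, hedged sketch that writes a layer definition (an MML file in JSON form) plus a matching CartoCSS stylesheet, and then calls the magnacarto binary once per builder. The MML key names follow the classic TileMill/CartoCSS layout that Magnacarto is designed to read, the database, table, and column names are invented, and the command-line flags (-mml, -builder) and builder names are taken from the Magnacarto README as I recall it, so check `magnacarto -help` before relying on them.

```python
import json
import subprocess

# Step 1: layer definition -- a PostGIS line layer selected with a custom query.
# Key names follow the classic CartoCSS MML layout; verify against the docs.
mml = {
    "Stylesheet": ["roads.mss"],
    "Layer": [
        {
            "id": "roads",
            "name": "roads",
            "geometry": "linestring",
            "Datasource": {
                "type": "postgis",
                "dbname": "osm",
                "table": "(SELECT geometry, type FROM roads "
                         "WHERE type IN ('motorway', 'primary')) AS roads",
            },
        }
    ],
}

# Step 2: the stylesheet -- a base line style plus attribute filters,
# matching the motorway/primary example from the talk.
mss = """\
#roads {
  line-width: 2;
  line-color: #888888;
  [type = 'motorway'] { line-color: red; }
  [type = 'primary']  { line-color: blue; }
}
"""

with open("project.mml", "w") as f:
    json.dump(mml, f, indent=2)
with open("roads.mss", "w") as f:
    f.write(mss)

# Step 3: build the same style for both backends. Flag and builder names
# are assumptions based on the Magnacarto README.
for builder, outfile in [("mapnik3", "project.xml"), ("mapserver", "project.map")]:
    with open(outfile, "w") as out:
        subprocess.run(["magnacarto", "-mml", "project.mml", "-builder", builder],
                       stdout=out, check=True)
```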
Stiles for different maps case. Normally you say, okay, I want to have my buildings only from this zoom level on or I want to have my roads within a specific zoom range in a different style. You can do this with zoom filters and they are converted to skater and denominator filters for MAPSERVER and MAPNIC. You can combine multiple filters by just adding more filters and it gets converted to ant expressions. Or filters, so you can just add multiple layer and filter definitions for the same style. Nesting is another really important feature. You can have base options for the road layer and then you can define, okay, but motorways should look the same but with line width of 10 and the primaries with line width of red. So you don't have to, for all layers, you don't have to, in this example I have line cab and line join should be round. I don't need to define this for every type. I just do it once. And this is really powerful feature. Variables are also really useful when you work with more complex map styles. You can just say add and then the variable name. It works for all variables, colors, integer values, float, etc. And you can reference them with an add again. You can work with aromatics, so you can say, okay, my motorway should be one pixel wider. You can do multiplication, subtraction, division, color functions. Cato CSS supports a few functions that work on colors like light and dark, etc. There's a mix function, so if you have two colors, you can mix them. And this is really handy if you want to produce good looking maps with different colors. So in this case I define that grass should be green. And here I have three polygon types, park, forest and cemetery. And I say, okay, the forest should be a bit darker and cemetery should be a bit lighter. I only defined green one time and then you have different colors for these different types. And there is even perceptual color functions. They are ending with a P, so it's dark and P and light and P. And they typically choose a bit nicer colors. They don't work on RGB color space like these regular functions. And what's now important is you can split your styles into multiple files. You can have a pellet MSS file where you define all your variables. And then you have, for example, only for the areas one MSS style file. And you can just change the one single color, the variable for grass. And now you have, instead of a greenish map, you can have a brownish map. So this is really easy to get new style variations. You can combine, even combine styles. Like here we have outline and polygon fill. So we can get a result like this. We can have multiple styles. In this case here I have a bold outline. And in that outline I have a dashed line. And we can do this by prefixing the fill and line option with a name like base and here dash. And first this gets rendered and then this again on top. You can also use that to style outlines, road outlines. Normally you are drawing a white-black line and then a little bit smaller line in the middle to get this. But at least with MAPNIC you are getting, because each style, each geometry is rendered two times directly after, yeah, they are rendered directly. So you have these artifacts. You can work around that by rendering the same layer twice. This is called attachment in Kars CSS. You just say road outline, he wrote two columns inline. And then you have everything, first you render all the outlines and then the inline. There are also classes. You can assign every layer multiple classes. 
And with this we say all layers that are in the label class should use this font name, this basic font size, this color. And then we can just create our layers, road labels and place labels and just overwrite the options that are special for these layers. And with a tiny little bit more of work you get complete map. Of course it's a little bit more work. This is rendered with MAP server and this is actually rendered with MAPNIC. So they are actually looking really similar and they are actually pixel perfect, at least for lines and polygons. They do use the same rendering library behind. And so yeah, you can have the same identical map with MAP server and with MAPNIC. And text labels are different because they have different algorithms for placing the label. But the quality at least since MAP server 7 is equal. There are a few differences between MAP server and MAPNIC. The font sizes are based on different DPI assumptions, halo sizes and one uses radius and the other the complete width. Magna kato compensates these differences and right now MAPNIC is a reference. So the sizes for MAP server map files are adopted so that when you say, okay, I have a font size of 10, then the font size in the map file will be different. Just so that the output from MAPNIC and MAP server will look the same. But we might make this optional in the future. Other differences, there are a few advanced features in MAPNIC or MAP server and they're not supported like label leaders or dynamic symbols. You can't write that in Kato CSS. So this is not supported. Data driven styles. It's possible to say, okay, the line width. I don't define that the line width is 10 pixel, but the line width actually is in the width column. And since MAPNIC 3 is supported for nearly all parameters, MAP server also supports that for a lot of style parameters. But right now this is not supported for most style options in Magna kato. In the future, if still the MAPNIC MAP server errors are not shown in the web client, you have to switch to a console. We like to change that. We still haven't done any official release. We provide binaries, but not an official 1.0 binary. And yeah, we still have to work on the documentation, especially documenting the available style options and the differences between MAPNIC and MAP server. And we might add more output builder. Right now we have builder for MAP server and MAPNIC. But yeah, it would be possible to add builder for G7, QGIS, OpenLayers, et cetera. But we have limited resources. So these output builders won't happen just by us, so we would need contributors or sponsors for that. And I guess some of you are developers here in this room, so the hardest part is already solved. We pass kato CS as resolve all the variables, expressions, et cetera. In the end you just get a set of simplified rules. And so to create a new output builder, you just take the result from Magna Kato, iterate over layers, iterate over the sorted, already sorted filters and rules. And for example, the MAP server and MAPNIC builder, they're only about a thousand lines of code. And a lot of this code is just really simple lines. So yeah, last slide summary. Kato CS as is a powerful tool to a powerful marker to create map styles. And Magna Kato is a tool that works with this marker language. Right now it works with MAP server and MAPNIC. And yeah, it is fast and extensible. And I hope you all try it out. So thanks. Thank you Oliver. We have time for some questions. 
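The features walked through above, variables, color functions, zoom filters and nesting, compose as in the short CartoCSS sketch below, written out from Python to keep the example in one language. Layer and column names are invented; the syntax itself (the @ variables, lighten/darken, [zoom >= ...] filters and nested blocks) is plain CartoCSS as described in the talk.

```python
# palette.mss: one place to change the whole colour scheme.
palette = """\
@grass:      #9fc77f;
@road:       #cccccc;
@base_width: 1;
"""

# style.mss: colour functions, zoom filters, nesting and arithmetic.
style = """\
#landuse {
  [type = 'park']     { polygon-fill: @grass; }
  [type = 'forest']   { polygon-fill: darken(@grass, 10%); }
  [type = 'cemetery'] { polygon-fill: lighten(@grass, 10%); }
}

#roads {
  line-cap: round;
  line-join: round;
  line-color: @road;
  line-width: @base_width;
  [type = 'motorway'][zoom >= 10] { line-width: @base_width + 2; line-color: red; }
  [type = 'primary'][zoom >= 12]  { line-width: @base_width + 1; }
}
"""

for name, body in [("palette.mss", palette), ("style.mss", style)]:
    with open(name, "w") as f:
        f.write(body)
```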
Does Kato CS support the true cascading like in the CSS from the web? Do rules interact with each other in terms of specificity overriding each other if one is more specific than the other? Yeah. So you can, yeah, it's cascading like what I had on my nesting slide. So you can nest this in the styles and you can have the right styles for the same layer and different files. And yeah, so the ordering and yeah, it's supported. Nice tool, thank you. Since there are still some differences between MAPNIC and MAP server and there will be more maybe if you include other outputs. Do you have a mechanism to write Kato CS with some deciders or whatever you call it? So if for MAP server output use this, if for... Yeah, you can do that in the builder and we are actually doing that in the builder. So we compensate already a few differences between MAP server and MAPNIC and this all happens inside the builder. We have one question and maybe we will get one more. MAPNIC supports more input sources and my question, can also style vector tiles with Magna Kato since MAPNIC could support that input or shapefiles? Or do you have first class support for that built in? Basically the layer definition is more or less directly passed to MAPNIC. So in the layer definition I could specify vector tile input? I haven't tested it but I'm not sure. If it works with tile null then it should work with Magna Kato as well because in the layer definition, the MML file you're just writing down key values and these key values are passed in the MAPNIC XML as parameters. Awesome, make styling even easier. It might work. As you said you can use whatever editor you want. Which editors do actually have syntax highlighting for Cartel 3.0? I'm using Sublime text editor. It does support syntax highlighting, text mate. I don't know how about a VR or something. How about the classics? I don't know. The editors I usually use, which are Emaxon or Cho, they don't support it. At least you could try it with... I have been looking for one maybe item. I don't know. If there is a short question we can take one more. Can you write Cartel CSS back? No. Why not? It's a very interesting thing because Tom McLeod from Mapbox said it's not possible to parse Cartel CSS in an abstract definition. The output would look completely different. It creates a set of simplified rules. Even if it would support Cartel CSS as an output, you would just have a huge list of really simple rules. A lot of duplicated rules, so all the cascading, the nesting would be lost. Okay. Thank you once again.
Magnacarto is a new open-source tool that makes it easier to create map styles for MapServer and Mapnik. It uses CartoCSS - a styling language similar to CSS - to create both Mapfiles for MapServer and XML files for Mapnik. CartoCSS provides powerful functions: you can create base styles and extend them for specific map scales or attributes. This avoids unnecessary repetition for similar map objects. CartoCSS styles are typically just 1/5th to 1/10th of the length of comparable Mapfiles. With variables, expressions and color functions (darken, lighten, mix, etc.) it is possible to create new design variations by changing only a few lines of the style. Magnacarto comes with a modern web interface that shows the final map design with MapServer and Mapnik. Live refresh and multiple map windows make it easy to directly verify any changes made to the map style. Additionally, there is a command line tool to automate the conversion of CartoCSS to Mapfiles and XML. The presentation briefly talks about the history of CartoCSS and Magnacarto. It shows important functions and how they are used in practice, and it discusses the power and limitations of CartoCSS. It will also show new and upcoming features and possible extensions (SLD).
10.5446/20369 (DOI)
Hello. How's everyone doing? Do you have some coffee? No? Yes? Today we are going to have a presentation from Yohchi Kajama. He is a senior scientist in the Jewish special laboratory of the Aero Ashashi Corporation. And today he will be presenting. So please, after we're finished, please open up the floor for questions. And he will do his best in speaking English, and he may need some assistance from time to time as well. So if you guys could be so patient with us. So thank you very much. Continue. Hello. Hello. My name is Yohchi Kajama from Japan. And sorry, I'm not good at speaking and hearing in English. I am a O.S.J.P. board member and a GIS association of Japan, the member of... First for the special interest group in GIS association of Japan, a member of the Aero Ashashi Corporation. And working at the QGIS community in Japan and working at Aero Ashashi Corporation. So 2013, Z8 has announced the open data chapter, a charter. And then many governments have begun, published their data on the Internet. Many sites of French, German, and Russia, and Canada, and to... UK, United Kingdom site is very old and famous. And they have using such a site using SIKAN. And there are many data sets and metadata in SIKAN. And we can search metadata. And there are metadata and resource. Resource is the database or data file. And there is not only special information data, but also many other data. Without text data, PDF data, or cumbersprite data, file data. And in United Kingdom government site, there are... If data is a special data, in this site, we can search data, drawing on map. And this is the state government site. This site is using SIKAN 2. And in this site, we can search a dataset with a special extension and draw such special data on the map. Government of Japan also began to make such a dataset site using SIKAN. And data... This is a geo-special authority institute site, only a special data site. But Japanese government site has no maps. In government site, have special data. But we can't see such data on map. Only metadata, text data, who make this data and how to use this data format. But we need such special data to see on the map. Government government is promoting projects to utilization of digital special information in parallel with open data. This is another government promotion. This is Japanese government project budget from 2013 to 2020. And all text is Japanese. So many people can't understand this. Only he can understand. Overview of the plan of government. Launch an organization to operate it by creating a foundation system, fundamental system of special information and construction of disaster prevention system and improve the use of IT in agriculture, forestry and fishers. And activate the local activity and to expand the project that use special information technology to upload. Japanese government plan about special information from 2013. And this talk, I described the presentation about the Mac 2GRAC, launch the organization and creating a foundation system of special information at Japanese government. 2014 to 2015, Ministry of Internal Affairs and Communications Japanese government has developed a system named G-Space platform, G-Space, perhaps G means geospatial. This is such sites image. And this project has this group gave the system development work and national institute of information and communications technology. And this is a private company, big information technology company. And the University of Tokyo, this is Japanese university. 
And I did the work of creating a mechanism to use the special information in combination of the open source software in this team. This two year I worked in this team using how to use force for G for that system. And this year, another ministry, Ministry of Land Infrastructure and Transport is preparing for the operation of this system. And last two years is development for that system for prototype. And this year, preparing operation of this system. And this year, Ministry preparing for operation of this system. And IGIT, this is a group of association for promotion of infrastructure geospatial information distribution. We make the operation of this system. And I'm also a member of IGIT. And I work for publishing this system in IGIT now. This is all the parts of this system. And this is a system configuration. Maybe you can understand. And front end is Apache and using Seekang and using Drupal. Drupal is a content management, famous content management system. And Seekang is a catalog server. And back end is a geoserver. Maybe you know, famous force for this, rendering and distributing OTC format data. And Solar is in search engine for the text. So in this case, Solar is query for spatial extension and using a post-glare scale and post this. Maybe you know. And Seekang has an interface to good one, OGR. This is a force for this. And this system is not same as Japanese government. All the system has a map. And using a reflet, this publishes a map. Using Seekang, Seekang as data catalog. Seekang is a world-leading open source data portal platform. And the U.K. government, U.S. state government or any other many government of U.R. using this system. And all the AD extensions for some of the spatial data has been implemented in Seekang. Using Seekang exit is extension spatial and Seekang exit high-fone geo view and recline view Seekang exit under view of base maps. And for what describe each extension's function. And Seekang exit spatial has made Ad-Seekang about spatial data, spatial data metadata and implement Seekang exit spatial. You can search metadata using area or format. And Seekang exit geo view can make a function. When you open the special data resource in Seekang, make a preview at maps and recline is a famous JavaScript library using search as a commercial value data. If commercial value data includes a coordinate, in Seekang, recline view display its data on the map. And Seekang exit base map is using base map for Seekang exit geo view or recline view provide maps as a base map. This is our prototype systems. And function that we have extended is here. Improved the preview function of spatial resource and implementation of the spatial data processing function using the good old GR from Seekang. And implement the web map to display selected spatial resource in Seekang. And made the ability to purchase application by selecting the spatial data product with private companies. This site has not only government data but also private company data such as product. And there is the ability to, if people using this site who want to buy such a data from a company, they can find data. But this is a private company's data. If they want to use that data, they must pay. But how to get such data introduce this site. Improved preview function of spatial data. Back to WMS preview, WMS data has preview function at geo view and all the special Seekang exit special. But that has many bugs. If we display such a preview display. Next position is not drawing. And preview KML and GeoJSON and CZML. 
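As a small aside on the CKAN side of this stack: with ckanext-spatial installed, the standard package_search action accepts an ext_bbox parameter, so the kind of area and format search described above can also be scripted against the catalog API. The site URL and bounding box below are placeholders.

```python
import requests

CKAN_URL = "https://example-gspace-portal.jp"  # placeholder site URL

# Search datasets whose extent intersects a bounding box around Tokyo,
# optionally restricted to a resource format. ext_bbox is added by
# ckanext-spatial; q and fq are standard CKAN search parameters.
params = {
    "q": "",
    "fq": "res_format:GeoJSON",
    "ext_bbox": "139.5,35.5,140.0,35.9",  # minx,miny,maxx,maxy (lon/lat)
    "rows": 20,
}
resp = requests.get(f"{CKAN_URL}/api/3/action/package_search", params=params)
resp.raise_for_status()
result = resp.json()["result"]

for dataset in result["results"]:
    print(dataset["name"], "-", len(dataset.get("resources", [])), "resources")
```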
KML and GeoJSON is famous. ZML is data format for Sezium. That is XML format and coordinate and timeline data. And use GSI geo special institute, Japanese government provided map data. Map as background map. And next implementation of the special data processing function using the guudaru or guudaru. And how to use. Select the special resource in Seekang. And in Seekang's display, we can use the processing method and specify the parameter for the processing. And execute if we can, okay, we click okay, execute processing. But some data, if big data using processing, we must have a very long time waiting. So that is not good for using such a web application. So we thought start in execute, change as Seekang government and store the result in the web public directory of the server. This process is a background batch processing. And send the result of the URL by email to the user. And user open the URL and download the result. If there are some, any error occurred to use a sender electric mail about your error. So user is waiting result of good result or bad result using electric email. And next, implementation of the web map to overlay displaying selected plural special resource in the Seekang. This is a display of the web map implemented using a reflet. And this is a left is a display of the Seekang dataset displaying. So maybe you only Japanese. And here, this button add map. So there is a resource that has a map add button. And another resource has not have a map add. So data type, some data type can add map. But another is not ability to add map. So we can make some maps, click add map, and then open a map, map, map, okay. And then made the ability to purchase application by selecting the special data product with private companies. Many of the private companies in Japan have sold the special data such as satellite image, IR, laser measurement data, navigation log, and so the position of the mobile phone log they sold. And you can find them in our site. And you can also make a purchase application of the data. This is a display of such. And select where you want and make an ad work card and make an estimate. You can. The future plans, we are working on trying the public of the system of this year in November. It is also plan use paid data of the private sector in this time. And we are trying to implement a preview and overlay display of the various, various things with time. I am a member of the QGIS community. And I want to talk about QGIS. We can use a wonderful plug-in called the second browser in QGIS. In QGIS, all Japanese. And in this, using this plug-in from QGIS to second direct to connecting and can search. And we can using a second data using QGIS directory. So I think QGIS is excellent tool in order to use such a data repository. Thank you for your attention. Are there any questions? Okay. Hi. I don't know which end user can get format special data. Is it file or some service or something else from your system? Data format. Data format. Okay. And data format download from SIGAN. SIGAN has any format put and download on the file. And we can provide preview using a few format. About WMS, WFS, KML, GeoJSON, RasterTile and CZML. So we extend about format number. What? Do you directly load it in SIGAN or in GeoServer? GeoServer. Oh, SIGAN has file storing file as a local file system. And Amazon S3. So we put this in this system, storing a file, Amazon S3. And another not a file is WMS is not a file. This is a web service. Web service is another. 
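The background batch-processing workflow described above (run a GDAL/OGR job, drop the result into a public web directory, then e-mail the user a download URL, or the error) can be sketched with the standard library. Paths, host names and the ogr2ogr arguments below are placeholders; the real system drives GDAL/OGR from CKAN rather than a bare script.

```python
import shutil
import smtplib
import subprocess
from email.message import EmailMessage
from pathlib import Path

PUBLIC_DIR = Path("/var/www/html/downloads")        # placeholder web directory
PUBLIC_URL = "https://example-portal.jp/downloads"  # placeholder base URL

def notify(address, subject, body):
    msg = EmailMessage()
    msg["To"], msg["From"], msg["Subject"] = address, "noreply@example.jp", subject
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:         # assumes a local mail relay
        smtp.send_message(msg)

def process_resource(src_path, user_email):
    """Convert a resource with ogr2ogr in the background and mail the result."""
    out_name = Path(src_path).stem + ".geojson"
    tmp_out = Path("/tmp") / out_name
    try:
        subprocess.run(
            ["ogr2ogr", "-f", "GeoJSON", str(tmp_out), src_path,
             "-t_srs", "EPSG:4326"],
            check=True, capture_output=True, text=True,
        )
        shutil.copy(tmp_out, PUBLIC_DIR / out_name)
        notify(user_email, "Processing finished",
               f"Your result is ready: {PUBLIC_URL}/{out_name}")
    except subprocess.CalledProcessError as err:
        notify(user_email, "Processing failed", err.stderr or str(err))
```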
So it is loading the sql file stretchedals andridge config. Can I use it? It is a dataset and dataset has a file resource and this is a WMS resource and a title resource. So one dataset have a several type of data. Some data is a file, some data is a store, post-GIS and using a geosurfer. And second can register resource as well. Not in this server, in another server resource link. So many kind of data can store. Am I right with file storage, geosurfer storage, post-GIS storage, AWS storage not connected. If something change you need reload it. It's not the common storage for each data. There are management in this server's data is have a... No illegal change. But if there are some link from another site, maybe sometimes happens such a illegal change, I think. Is it okay? Okay. Sorry, I'm not good at using English. So please, yes. Any additional questions? Did you want to say something? I will ask you in English. Please use Japanese. Okay. Can I speak? Well, pretend I am a provider of a data. Okay. I'm like a company and I want to put my data into this platform. What should I do? Okay. This...and you put...you want to put your data in this system to tell manager of this site. And so it is not automatic registration. You cannot automatic registration. So your message, read managers and if they think okay, you are a good man. So you can register your data here. Okay. Thank you very much, Yuchi. Thank you. Thank you. Yes. Thank you. Okay.
The Japanese government has begun to create and execute a plan to take advantage of spatial information combined with ICT technology. Since 2014 the Ministry of Internal Affairs and Communications has developed the G-Space platform to collect and circulate spatial information from the public and private sectors. I have participated in the development to use FOSS4G in this system. The G-Space platform of the Japanese government is based on a data catalog system configured with CKAN and Drupal. We improved the spatial data preview feature in CKAN. We created additional data processing functions for the spatial resources in CKAN. We extended CKAN so that selected spatial resources can be displayed using Leaflet. We also use GeoServer + PostGIS as a back end to provide spatial information; even under heavy request loads, the system can be supported by clustering the back-end components. I'm going to explain how we used FOSS4G in this system.
10.5446/20368 (DOI)
Okay. Hello ladies and gentlemen. Welcome to the second morning session in the tunnel. Very nice here. Not much air. It's a very interesting session I think. I found the three buzzwords cheap, economy and commerce in the titles of the presentation. So we said it's a business session now. So it's very sad there are just a few people here I would say because open source is business as we knew. So now we can hear. Okay. And we will hear the first presentation from David Curry and I'm very interested. Okay. Great. Can you hear me? How about that? Thanks for coming out. My name is Dave Curry. I'm president of Geoanalytic. We're a consulting engineering company based in Calgary, Canada. My partner Brent Fraser couldn't make it but he's responsible for all the questions that I can't answer today. So what I'm going to talk about today is icebergs but the context is going to be remote sensing and satellite imaging. So satellites, as everyone is aware, provide us with a really good archive of environmental information and it's growing all the time. But one of the key problems that we run into is extracting good quality information. Certainly crowdsourcing is a good way to go for some applications. For example, OpenStreetMap has used satellite imagery and aerial photography to map large portions of the world from a crowdsourced perspective. But when we look at environmental issues, things where the image is a single record of what we're trying to record, then the actual quality of the interpretation is a much more difficult thing to quantify. And also there's less motivation for people in the public to take part in providing us with interpretation. So what I'm going to talk about is a method that we've used to collect data for an environmental project from satellite imagery and confirm that the quality of the results is high. So my project was looking at icebergs and just to give you some background on icebergs, I'm sure a lot of you are familiar with what they are, they are large pieces of ice and they're much different from sea ice, which is what you see, this gray stuff here, in the sense that sea ice is formed from freezing seawater. So it's got salt in it, it's formed at low pressure and at fairly high temperatures, whereas icebergs are formed at high pressures, low temperatures in at the tops of glaciers and they're formed from fresh water, they're formed from snow, so they're much, much stronger and thicker and they typically can be 10 to 100 times thicker than the surrounding sea ice. Now, because of all the issues that icebergs present, there's lots of different studies that have been done and we've got a whole classification scheme for icebergs, not just in terms of their size but also what they look like. And I'm just going to gloss over that, but I think it's important to point out that it's difficult to look at an image, particularly one from space and say, that's an iceberg, that's sea ice, that's an island and so on. So I just want to briefly talk about what we're doing. I can't talk a whole lot about who we did it for or any of that kind of thing, but essentially we're trying to minimize risk. When people are working offshore, whether it's shipping, exploration, building facilities and so on, the presence of ice is a major concern. If you're a marine engineer or an offshore architect, the presence of icebergs and the size of them is a major consideration. 
So what we wanted to do was get historical data for this particular area we were working on and boil it down to a particular statistic that we can graph, essentially in terms of length, we call it a probability of exceedance curve. And it basically tells you what your probability is that you'll come across an iceberg of a size greater than whatever this length is here. You can use that in your design calculation. If the probability of exceedance curve tells you that you've got an 80% chance of encountering an iceberg that you can't handle, then you might want to change your design. So our project location, way up north, very interesting area off the east coast of Greenland. The top of the box there is basically the Fram Strait and that's the area between Svalbard and Greenland. And that's where most of the ice from the Arctic exits the Arctic Ocean and it comes streaming down the east coast of Greenland. But there's not a lot of icebergs in that ice. There are some extreme features. There's some old ice, but a lot of it's basically the kind of stuff that your typical icebreaker can handle. The icebergs come off the glaciers on the coast of Greenland and they go up to sea there and float down towards Iceland. So what did we do? We looked at a 12-year period, 1999 to 2011, and we tried to, because our clients were cheap, we tried to minimize the costs that we incurred both in terms of satellite imagery and also in terms of interpretation and analysis, but at the same time we had high standards. So we used Lantzat 7. This is a pretty good image. You can see we've got an iceberg trapped in a piece of ice flow there. It's useful at 15-meter resolution once we've combined it and the color combinations with Lantzat are excellent for this, when the lighting is good. We also used Aster, which is a sensor carried on the Terra satellite. Also 15-meter resolution, not quite the same color capabilities, but we used quite a few scenes from that. So I think the underlying theme here is collect a lot of imagery, put it together and allow people to interpret it online. In order to fill in some of our time and geographic gaps, we actually did purchase some imagery from the Japanese ALOS satellite. One sensor, the ABNIR-2, is very similar to Aster, a slightly better resolution. We got a couple of those scenes. We also used the Prism sensor, which is a stereo 2.5-meter sensor, and it gives us very good, these are a little washed out, but very good indication of these icebergs here. Just by way of context, those icebergs are grounded in 70 meters of water, and they're holding up all that packed ice. This white rubble field that you see here is the ice that's crushed up against the iceberg and is being held back, anchored as it were. Here's another image comparison, so this is a very deep close up of a Landsat 7 image, and you can see one of the challenges that we have is these SLC off breaks in the image. If you're familiar with Landsat 7 imagery, it had a minor failure in about 2003 that led to these gaps in the images, making it a bit challenging to interpret. What we're seeing here is some more icebergs also anchored, and here's the ALOS imagery, which gives a very good example of both tabular icebergs, and this one here has a peak on it. I just want to point out that interpreting these icebergs is not trivial. It takes a lot of understanding of what's going on with respect to how these things form and how they act in the wild. Just a couple more examples. 
Here's another SLC off image with a whole cluster of icebergs with surrounding sea ice, and you can see these blue features here are ponds of melted water sitting on top of the ice. These are some of the training things that we provided to our crowd sourced interpreters, who I'll talk about in a second. One last example, here's another large iceberg grounded in fast ice, and three years later, it's still there covered in snow. Just to give you a sense, this is one of the reasons that automated interpretation did not work very well for this application, which is why we had to turn to manual. How did we do this? Well, open source all the way, we gathered our images from a list of hundreds using grass, GDAO, QGIS, corrected them, color combined them, tiled them up, and then we served them all up using a stack with open layers and geomodes. At the back end for CRUD, we used Django, and for reporting and analysis, we used R. So here is what our user interface looks like, and it's basically a digitizing capability. It uses a map server to serve up the tiled images, and it uses geomodes to provide digitizing capability. This is the Django side of it, which allowed us to very quickly assess the results that our interpreters were providing. We had an expert interpreter, well, we had two. One is a consultant, one felt it did all the work, who would go through these and essentially score them, and that gave us the ability to provide feedback to all of our interpreters regardless of where they were or what they were doing. The other thing that we did was we provided them with an interpretation key, and we built this up as we went. This is actually, well, this is a live example. So these are some of the features that were observed and provided the information about the size of it, what the image was, and also a comment about how it appears and why it looks the way it does. That's very important, I think, to give this sort of ability. So this was a live document that got built as we went along, and one of the things we saw here was this ice island, or an island, this is not an iceberg, it's actually an island that was not on the charts. It got interpreted as an iceberg several times. I'm going to go in here. So our interpreters, we recruited them from universities, and there were other people who had sent their resumes, essentially. We selected them because they were interested in GIS, but they had no experience with sea ice and some only minimal interpretation and experience. What we got them to do was analyze as much of a scene as they could and outline that area, and then digitize any observed icebergs in there, then mark the images as completed. We only paid them on the basis of icebergs that they digitized that we approved of. So in other words, if they digitized a piece of ice and we said that's not an iceberg, they didn't get paid. They did get paid for completing an image. So if they looked at an entire image and there were any icebergs, they still got paid for that. This fit into our approval process where our expert interpreter would look at all of their results and score them and add a comment if necessary. So if they digitized an invalid feature, that was of no value. If he accepted it, then that went into our database. But he could also score it as fixed geometry or fixed type. So if they classified it incorrectly or got the wrong outline, they could go back and fix it. So assessing their performance, we actually had to deal with on two levels. The first one is quality. 
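Since Django handled the CRUD side of this workflow, a data model along the following lines is plausible. This is a hedged GeoDjango sketch, not the project's actual schema: model, field and status names are invented, but it captures the pieces the talk describes, namely a digitized polygon, who digitized it, and the expert's verdict (accepted, rejected, fix geometry, fix type) with an optional comment.

```python
from django.contrib.auth.models import User
from django.contrib.gis.db import models

class SourceImage(models.Model):
    """One satellite scene served to the interpreters."""
    scene_id = models.CharField(max_length=64, unique=True)
    sensor = models.CharField(max_length=32)          # e.g. Landsat-7, ASTER
    acquired = models.DateField()
    completed_by = models.ManyToManyField(User, blank=True)

class IcebergFeature(models.Model):
    STATUS_CHOICES = [
        ("pending", "Awaiting review"),
        ("accepted", "Accepted"),
        ("rejected", "Not a valid feature"),
        ("fix_geometry", "Fix geometry"),
        ("fix_type", "Fix type"),
    ]
    image = models.ForeignKey(SourceImage, on_delete=models.CASCADE)
    interpreter = models.ForeignKey(User, on_delete=models.PROTECT)
    outline = models.PolygonField(srid=4326)          # digitized iceberg outline
    iceberg_type = models.CharField(max_length=32)    # e.g. tabular, grounded
    status = models.CharField(max_length=16, choices=STATUS_CHOICES,
                              default="pending")
    reviewer_comment = models.TextField(blank=True)
    digitized_at = models.DateTimeField(auto_now_add=True)
```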
So in other words, what is a rejection rate? How well did they digitize based on the quality of the polygon? And we did a fair bit of analysis. We actually provided them with a sort of a daily score of how they were doing. Because it was a day late because the supervisor had to look at their work. But overall, we had a rejection rate of less than 2%. That table is not that bad. What you can see though is that we got down to four interpreters. We started off with quite a few. I think 15, there was a few that never actually interpreted anything. So we had a lot of people self-select out of the program. And at the end of the day, there was really three that did almost all the work over a period of 12 weeks with a couple of weeks off. And there's the rejection rate. You can see that the geometry was usually pretty good except right at the beginning there was a few bad geometries. A lot of not valid features interpreted. And that I think relates back to the difficulty in doing this. But overall, we had I think a very good result. And we ended up interpreting some 47,000 icebergs. The other problem, of course, is completeness. In other words, these guys look at the image and they miss icebergs. How do you detect that? Well, the approach that we took is essentially a single blind test where we said, OK, give the same image to multiple interpreters and then compare the results. Which was a challenge to analyze. But we worked through it basically using the spatial analysis capabilities of R with post GIS. And we did about 10% of the images. And what we found in that was that for small features, it was very difficult to rate these guys. But for areas that were large enough that you couldn't miss them if you were doing a proper job, we found that there was less than one miss per 100 square kilometers. The other thing that affected the results was the SLC off artifacts. There was a lot of issues where the interpreters would not cross the gap to see to match up the icebergs on either side. And there's a rather dense table listing all the results that we did. We only actually compared the four. We only actually compared the four main analysts to do that. So cheap, but good. OK, the bottom line is the results were very good. And so the business case for this approach, I think, is strong. What we did, the approach to it, was to provide a remuneration scheme to the interpreters that motivated them to do the work, but minimize the amount of cost that we incurred. So we didn't have to provide them with office space. We didn't have to provide them with software. All the software was hosted in-house. And at the same time, they got valuable experience and guidance from a very skilled image interpreter. So there's basically payment both ways. The trick that I would say is that targeted recruiting is very important. You need to find people that are motivated to do it. I mean, when we look at something like OpenStreetMap, people are doing it because they live in their neighborhood. When we look at icebergs off the coast of Norway, people are only doing this to get paid. I mean, there might be a few iceberg enthusiasts out there, but we haven't found any of them. The performance base remuneration, that has to be carefully tuned. I think we did a good job of it, but to be honest, if you ask me what I paid people, I can't recall. And the other thing is that the expert reviewer has a lot of work to do. 
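The single-blind completeness check described above, comparing two interpreters' digitizing of the same scene, was done with R against PostGIS; a rough Python equivalent of one such query might look like the sketch below. Table and column names are invented and the 50 m matching tolerance is arbitrary.

```python
import psycopg2

SQL = """
-- Features digitized by interpreter A on this image that have no feature
-- from interpreter B within 50 m: candidate "misses" by B.
SELECT count(*)
FROM iceberg_feature a
WHERE a.image_id = %(image_id)s
  AND a.interpreter = %(interp_a)s
  AND NOT EXISTS (
        SELECT 1
        FROM iceberg_feature b
        WHERE b.image_id = a.image_id
          AND b.interpreter = %(interp_b)s
          AND ST_DWithin(a.outline, b.outline, 50)  -- assumes a metric CRS
      );
"""

with psycopg2.connect("dbname=icebergs") as conn, conn.cursor() as cur:
    cur.execute(SQL, {"image_id": 42, "interp_a": "analyst1", "interp_b": "analyst2"})
    misses, = cur.fetchone()
    print("possible misses by analyst2:", misses)
```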
Because these people are working evenings and weekends and they need instant feedback, that guy is basically, it's best if he does a couple hours of work every six or eight hours. So it might be good to have more than one person, depending on the size of the project. So there's just a quick listing of the open source tools that we used. I think all of those tools adequately and more than adequately provided the results that we needed. And basically, this could be scaled up to a much larger project. And thank you in any questions. Thank you very much. That was a very great and interesting presentation. It makes us remember that we are still doing geography and it's nice to see something of the real world in an IT conference. Are there any questions? So thank you for the presentation. That was a pretty interesting approach. I was particularly interested in the crowdsourcing and what sort of channels you used to actually get the people. First question, the second question is how much did you pay them? What was the motivation scheme? And the third question is why didn't you use something like Amazon Turk if you are just valuing, basically basing your incentive just on monetary rewards? Those are excellent questions. Okay. So the first thing that we did was contacted geography departments at several universities and put postings on their websites. That attracted 60 responses of which there were about 20 that got serious. We could have done more. We didn't go that far afield looking for people because in the end we had a face-to-face training session where I brought everyone together and gave them hands-on training. We could have done that remotely. Second question is what do we pay them? And I can't remember. It's been a couple of years, but the bottom line was that there was two people that worked really hard at this and they estimated that they were making about $60 an hour. So when you get good at it, you can click through a lot of icebergs. Canadian dollars, right? Yeah, okay. So devalue that a little bit. That's what we're paying in Canadian dollars. The last question is why not mechanical Turk? And we did look at that, but there's a number of issues with respect to quality control and secondly with respect to delivering the data to mechanical Turk, the way that the Amazon system worked. We decided that from the... And it was early on. We'd never used it. We'd only looked at it. It made more sense to us to build our own system because we had most of it in place already. Okay. Thank you. Other questions? Yes? Thanks for that talk. It was really interesting. So you say this could be scaled up if you were scaling into a much bigger project. Would you... Didn't you try and persist with having an expert verify every single thing or go from a more statistical approach to error rates? I think we could use a more statistical approach, but actually the expert assessment approach worked pretty good. With the Django interface, basically we use the Django template language to just create a map server, map file for every little image and it pops them up. You can scroll through it really fast and once you've been doing it for a while, it's not that much work. It's just that feedback issue that's more of a challenge. I can see where you would want to do that. There are some opportunities, I'm sure. Okay. Some other questions? Fine. Thank you very much and we have a small pause now.
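Returning to the deliverable mentioned at the start of this talk, the probability-of-exceedance curve itself is easy to illustrate. The sketch below computes a purely empirical curve from a list of digitized iceberg waterline lengths; it is not the project's actual statistical treatment (which would normally fit a distribution and account for detection limits), just the idea behind the graph.

```python
import numpy as np

def exceedance_curve(lengths_m):
    """Return (length, P(iceberg length > length)) from observed lengths."""
    lengths = np.sort(np.asarray(lengths_m, dtype=float))
    n = lengths.size
    # For the i-th smallest length, the fraction of observations strictly
    # larger than it is (n - i) / n with this simple empirical estimator.
    prob_exceed = (n - np.arange(1, n + 1)) / n
    return lengths, prob_exceed

# Example with made-up waterline lengths (metres):
lengths, p = exceedance_curve([35, 60, 80, 120, 150, 210, 400, 75, 55, 90])
for length, prob in zip(lengths, p):
    print(f"P(L > {length:5.0f} m) = {prob:.2f}")
```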
Satellite image archives provide a wealth of valuable historical data that can be used to assess changes in the environment, but extracting high quality information can be costly and time consuming if we restrict the interpretation to experienced image analysts. We attempt to reduce these limitations by crowd sourcing the interpretation process via a web based digitizing system based entirely on open source tools. This approach can lower project costs by eliminating the cost of office space and equipment for the analyst, as well as allowing flexible working hours and locations. The challenge with this approach is to ensure that the quality of the interpretation remains high. Within the context of a project to model historical iceberg occurrences off the coast of Greenland, this talk will discuss the methods we have implemented for quality control while providing training and feedback to our analysts from an interpretation expert. The business case for this approach will also be discussed, including the risks and rewards of paying interpreters for each correct feature digitized. In our case we were able to quickly and accurately interpret several hundred images resulting in the measurement of tens of thousands of features. By using cloud based image archives and client/server strategies, this approach can be economically scaled up to much larger projects.
10.5446/20367 (DOI)
Hello. Yeah. So welcome everyone to this session here. Where are we going to go into the clouds, but not up there where there's none, but into the point clouds, starting with Mike Smith, who's going to present PDAL. Just out of curiosity, how many people are using PDAL or Poodle or PDAL now? Okay. How many who haven't used it at all? Okay. Kind of an even number. So we'll kind of cover some advanced and cover some beginning stuff too. Just for those of you that are curious, this is a point cloud scan in the thermal range using a FLIR camera of Palmer Station in Antarctica. Just one of the places we've been able to scan. So a little bit about Poodle or PDAL, however you want to pronounce it. It's a BSD licensed library. So the most permissive available. We do support proprietary plug-ins. There are a number of proprietary plug-ins already, some of which are public and some which are not. It's a C++ library and we have a GitHub repo with pull requests. Gladly welcomed. We've had three official releases. The 1.1 release last year, just after Phosphor G Sol. We had a 1.2 release earlier this year and we're just going into a 1.3 beta release right now. So those of you that want to get out there and start testing things, find bugs, we welcome that. So the way Poodle is set up is very much like Git with a single Poodle command and then many sub-commands that run off of that. But Poodle is primarily a translation engine. You can see we have a large variety of readers, a large variety of writers. There's a reason the name Poodle is very similar to GDAL. It's very much inspired by GDAL. Howard Butler is a current committer on the GDAL project. So a lot of things are structured very similarly. It's intended to really be the GDAL for point clouds. And you can see we've just in the last release upcoming, we've added a bunch of new formats. Some of them scientific formats like Elvis and Icebridge that are primarily of interest to NASA. A new text reader that will help you get those XYZ text files into LAS, LAZ, something like that. And a couple of new writers, one of which I'll discuss. But the heart of Poodle is the filters. These are really the processing power of Poodle. A number of them are basic operations like filtering, splitting, chipping, sorting, things like that. Some of them are more noise processing like the Poisson filter, the statistical outlier. They're designed to really allow you to clean up your data. And then there are the main Poodle applications. These are kind of the top level commands. Some of these are basically wrappers around certain filters. For example, the Poodle ground command. This is just a way to get access to the filters.ground in a simpler manner. Some of them are very similar to GDAL. So there's Poodle translate just like there is GDAL translate. It's your basic command for changing file formats, doing minor changes to your files, other things like that. There's utility formats like Poodle T-index, I should say. Very similar to GDAL T-index. Building a tile list of all your files and allowing you to operate it on a single file once you've built that. And then there's Poodle pipeline, which is the main power of the Poodle processing engine. Here's what's really new in 1.3. JSON pipelines. XML is gone. Well, it's actually not gone. We'll preserve it around for two to three more releases. So XML is still supported, which is good because we built a lot of our applications around the XML pipelines. But we're in the process of converting over to JSON now. 
And the JSON pipelines make things much simpler. As we go through here, I'll show some examples of the Poodle pipeline in XML and then the complement in JSON. We have an enhanced derivative writer that allows you to write out things like slope, aspect, contours, hillshades. We have a bunch of new analysis filters that I briefly mentioned, the new T-index reader, which now gives you ability to do merge, clips, and filters right on a large number of files all with one command. Like I said, we have a better text reader now and improved argument validation, so much better feedback at the command line when you're filling out your arguments. And thanks to Connors work, transparent S3 URL handling. So if you store a lot of your data in S3, you can just reference an S3 URL in the command line. By the way, if people have questions, let me know during the talk. I'd rather do them when we have the context. So just give me your question and I'll repeat it for the audio, but we don't have to wait until the end for questions. Poodle ground, something that people like to do a lot with point clouds, classifying point cloud into ground and non-ground points. You can classify it, so write into the classification routines, or you can actually remove the points directly and make a smaller point cloud. Some LiDAR formats don't support classifications, so you actually have to remove the points. Things like LAS do support classifications, so you can just mark those points as ground. It uses an algorithm from the point cloud library that gets compiled into Poodle, the progressive morphological filter. And there is an approximate version, so when you're trying to play around with the parameters on Poodle ground, you might want to try with the approximate version because it's much faster to operate and then take approximate true once you get things narrowed down. So here's an example of a point cloud from Sitka, Alaska. It's, as you can see, quite heavily vegetated, and this was an area they wanted to produce a digital elevation model of. So we ran the Poodle ground filter on it, and now we have a bare earth point cloud. So this is the kinds of things you can do with the ground filter. Poodle Info is very similar to GDAL Info. It gives you, you can get basic summary information about your point clouds. There's several other options besides summary, things like just looking at the metadata, looking at the stats, as well as the ability, this is an overview of this area, the ability to get a boundary file for your point cloud. This is what's also run when you're generating a Poodle T-index, and it stores the boundary in the GDAL shapefile or whatever GDAL format you support. So by default, the parameters are fairly coarse. You get a very coarse outline for your data, but you can change the option to it, to the boundary filter, and get finer and finer boundary calculations at the cost of going through more data and doing more intensive calculations. One thing that's new is the boundary, the Poodle Info when you do a boundary, returns back the density because it's calculated the area and it knows the number of points. It will, however, vary based on the boundary that it calculated. So if you do a very coarse boundary, you might have a lower density than you do with a very fine boundary. Poodle Info can also give you information on specific points. You can drill down to your file and find are there bad points here, what are the specific ones, and then potentially remove those points. 
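The commands covered so far (translate, info, ground) all script easily. The sketch below converts LAS to compressed LAZ, parses the JSON that `pdal info --summary` prints, and runs the ground classifier; the `pdal ground` option for the approximate mode and the exact JSON key names vary between releases, so treat those parts as assumptions to check against `--help`.

```python
import json
import subprocess

# Basic format conversion: LAS in, compressed LAZ out (formats inferred
# from the file extensions).
subprocess.run(["pdal", "translate", "input.las", "input.laz"], check=True)

# Summary information (point count, bounds, dimensions) as JSON on stdout.
out = subprocess.run(["pdal", "info", "--summary", "input.laz"],
                     check=True, capture_output=True, text=True).stdout
summary = json.loads(out)["summary"]          # key names may differ by version
print("points:", summary.get("num_points"))
print("bounds:", summary.get("bounds"))

# Ground/non-ground classification with the progressive morphological filter.
# The approximate-mode option name is an assumption for this PDAL era.
subprocess.run(["pdal", "ground", "--approximate",
                "input.laz", "ground_classified.laz"], check=True)
```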
There's a split option that allows you to split very large point clouds into a series of tiles, either based on capacity, in this case, each of these point cloud tiles here is approximately three million points, or you can also specify in terms of length, and then you have approximately equal size tiles, depending on how much area you actually have, but they're going to vary in terms of point counts. Poodle T-index, very similar to GDAL T-index, allows you to run through a whole series of files and create a single OGR-compatible vector format that will store the path to all your files as well as a boundary for each file in the geometry column. Then this allows you to use this T-index for a variety of different operations, filtering, merging, clipping, without having to specify all the files that you put into it. You just reference the one single file. Here's an example where I took the split files, added a polygon geometry around it, ran this command, and I get the merged, clipped point cloud from the results of all those individual files. Poodle Translate is the main point cloud translation method, basically input outputs and then whatever different advanced features you want. The advanced options are very similar to the layer creation option and data set creation options that you see in GDAL. It assumes certain defaults which you can override whenever you want to or need to. Poodle Pipeline is basically the full power of Poodle. This allows you to put all your commands into one specific script and run it across one to many files. It allows you to run through the data just one time while doing multiple, if you need to, filter operations, multiple read operations, multiple write operations. In the end, most operations in Poodle go through a pipeline, whether it's transparent to you or not. Here's an example in the old XML format. Things start from the inside out, so they start with a reader, then go through a filter, this case a range filter, and filter between 0 and 99,999, and then in this case write out to a last file. Notice there's no file names in this particular pipeline. In this case, I'm specifying the file names on the command line, so I can keep this pipeline and just run through this and pass multiple file names. But XML is deprecated and now we're on to JSON. So you can see that the JSON format for specifying the pipeline is a lot simpler, a little bit less typing, definitely less angle brackets. And just like the colorization filter in XML, now in JSON, you can pass any GDAL compatible raster format to a point cloud. It does have to be in the same projection as the point cloud. And colorize the pipeline, basically it'll set the RGB values for those dimensions. Or you can just do it all at the command line, so there's multiple ways of doing the same thing. So here's an example of that particular raster, Sitka Alaska again, that I just pulled from the USGS. And then once you apply the colorization filter to your point cloud, you now have a colorized point cloud. All these images, by the way, were taken from cloud compare, which is another open source point cloud viewing engine that uses liblast to actually render things, doesn't use poodle yet, but that's forthcoming. Here's an example of using the reprojection filter, which is a common operation for point clouds. Again, this is the deprecated XML format, now in JSON. And you can specify your parameters at the command line, or you can even do a batch processing with XRs and do a whole bunch of operations. 
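Since the slides with the actual pipeline text are not reproduced in this transcript, here is a hedged reconstruction of a JSON pipeline combining the two filters just discussed: colorize from a GDAL raster in the same projection, then reproject. Stage option names (raster, out_srs) follow the PDAL filter documentation as I recall it; file names and the target CRS are placeholders. The pipeline is written to disk and run with `pdal pipeline`, and the reader/writer file names can be overridden on the command line as the speaker describes.

```python
import json
import subprocess

pipeline = {
    "pipeline": [
        "sitka.laz",                                  # reader inferred from extension
        {
            "type": "filters.colorization",
            "raster": "sitka_ortho.tif",              # must share the point cloud's CRS
        },
        {
            "type": "filters.reprojection",
            "out_srs": "EPSG:3338",                   # placeholder target CRS
        },
        "sitka_rgb_3338.laz",                         # writer inferred from extension
    ]
}

with open("colorize_reproject.json", "w") as f:
    json.dump(pipeline, f, indent=2)

subprocess.run(["pdal", "pipeline", "colorize_reproject.json"], check=True)
```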
Another common thing to do with point clouds is generating digital elevation models. We have the points to grid output writer that we use that was provided to us from the open topography group. In this case, I've actually have the file name and the input and output file name in the point cloud, in the pipeline, which that's XML. Here it is in JSON. So running this pipeline command would run, read this file, filter it based on the classification. Classification 2 is ground, so it's going to cruise a bare earth digital elevation model, and then write it out at a one meter grid distance using in persist waiting output to a TIF file. Or here's the same kind of thing done with command line parameters rather than specifying in a pipeline. And you can even turn a series of command line parameters into a pipeline with the dash dash pipeline command. Other highlights about Poodle and the 1.3 release, we've added a new filters dot height that allows you to calculate normalized heights on a new dimension. One thing about this, though, it does have to be, it does have to have a ground classification, because otherwise we can't really tell where the base ground elevation is to calculate the normalized heights. We've added multiple thinning options. So we used to just have random, I believe it was, and now we've added Poisson and voxel grid thinning options. The filters dot attribute has been enhanced, so now you can use OGR features. So say you have a vector file of building footprints or other things like that, places you want to mask out, you can assign classifications from that vector format and apply it onto the point cloud data. And we've added a new density kernel command to look at density files. So here's an example of running the density command on that point cloud. These are just individual hex bins that are calculated fairly crude, but you can see that most of the points are captured along the hillside and down along the shore that really weren't too many points captured. Poodle now has a Python API available via PyPy. The ability to get your point cloud data into a NumPy array is as easy as four lines of Python now. Just open the pipeline, execute it, and then read it into the array. So very simple now to get your data into a NumPy array for analysis. We've refreshed the Poodle documentation. It's now in read-to-docs format with download options. The content has been completely reorganized and I think it's a lot simpler to read. Recently Howard gave a workshop at one of the NGOs and we have that on the website now, so you can download and work through your own workshop. It's 100 plus pages utilizing QGIS and Docker and goes through all the basic capabilities of Poodle. So you don't even have to pay for a workshop, you can just download it and run it. And we've added a bunch of new tutorials to the website. Poodle releases, source code will always be at poodle.io or clone the GitHub repo. The recommended way we have for people to get Poodle now is Docker. Docker is the fastest way to get Poodle. Just do Docker pull, Poodle, specify the release or leave it as latest and you're going to get an up-to-date runnable Poodle. We also have a dependencies Docker image that's really nice if you want to build your own Poodle image. So you want to compile in some custom argument, custom options, some different capabilities. All the dependencies are packaged together for you in a single Docker image and you can just use that as the base for your Docker. 
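The Python API mentioned above makes the DEM example scriptable end to end. The sketch below filters to ground-classified points and writes a 1 m IDW grid; it uses writers.gdal, which in later PDAL releases supersedes the points-to-grid writer discussed in the talk, so the writer name and its option names are assumptions to verify against your PDAL version. It also shows the "four lines" needed to pull the points into a NumPy array.

```python
import json
import pdal  # the PDAL Python bindings from PyPI

pipeline_def = {
    "pipeline": [
        "sitka.laz",
        {"type": "filters.range", "limits": "Classification[2:2]"},  # keep ground only
        {
            "type": "writers.gdal",        # newer replacement for points2grid
            "filename": "sitka_dem.tif",
            "resolution": 1.0,
            "output_type": "idw",
            "gdaldriver": "GTiff",
        },
    ]
}

# The "four lines": build, execute, and read the points into NumPy.
pipeline = pdal.Pipeline(json.dumps(pipeline_def))
count = pipeline.execute()
arrays = pipeline.arrays            # list of NumPy structured arrays
print(count, "points;", arrays[0].dtype.names[:5], "...")
```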
RPM is available, and Debian unstable is available for PDAL. However, for OSGeo4W we kind of need a champion. If there are any of you that want to work with Windows and want to build PDAL for us, contact us and let us know. And that's it. Okay, thank you, Mike, for this great overview of PDAL — or "Poodle", as you call it. Any questions? Yeah, Mike, does the Docker image contain all of the dependencies for all the optional things like the points2grid library and such? All the basic ones, yeah. There are some things it doesn't contain — Oracle, stuff like that — but Postgres is there for pgPointCloud. Anything that's publicly available is available in the PDAL Docker image. There are some things that are non-distributable, like the Oracle Instant Client, that are not included in that. And one more question. Concerning calculating the density, the hex bins: I tried to do that and it shows a visual representation, but can you do that to get what the point density is per square meter, let's say? The pdal info command will give you the point density per square meter overall for the whole file, not for individual bins. So you can't make an image of the point density. You could, because each one of those hex bins does have an area and it does have a count, so you could calculate it yourself. Okay. All right, thank you. In the back. Okay. Thank you first. It looks amazing to me. This is just awesome. Maybe two quick questions: for the conversion to DEM, when I have a bit of a sparse point cloud, is there some form of interpolation or something like that? There is. I know there are currently some newer, improved digital elevation model calculations being worked on that are better at handling sparse data than the points2grid algorithm is. Those will probably be in the 1.3 release. They may not be fully documented at 1.3 — they're kind of alpha-level quality. So keep an eye on that and make comments in the GitHub tracker or other places where you find issues. The points2grid writer will do it, but it does have some issues sometimes. Okay. Okay. Thanks. And the second, probably just a shot in the dark, but when you talked about coloring point clouds — is there an inverse operation, like getting the raster image back from a colored point cloud? You could probably write out the XYZ RGB arrays and — basically you'd have to convert it back to a gridded format, like with points2grid, and then just colorize that from the XYZs. So there's nothing quick to do that, but you could do it manually. Thank you for the presentation. We've seen a boundary approximation. Is there something like a 3D representation, a generalization of that? There isn't. It's 2D only currently. 2D, yeah — the boundary calculation. Will there be one? Yes, if you would like to submit a pull request. No, we could certainly take that under advisement and work on it. It's not something we currently have in the pipeline right now, but it certainly is something that could be added. Okay. Any other questions, remarks? I think then thanks a lot, Mike.
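To illustrate the answer about per-bin density: the hexagons written by the density kernel can be post-processed with OGR, dividing each bin's point count by its polygon area. The output format and the COUNT attribute name are assumptions here — check your own output with ogrinfo first.

```python
from osgeo import ogr

ds = ogr.Open("density.sqlite")                # hexbin layer written by `pdal density`
layer = ds.GetLayer(0)

for feature in layer:
    count = feature.GetField("COUNT")          # points in this hex bin (assumed field name)
    area = feature.GetGeometryRef().GetArea()  # bin area in CRS units, e.g. square meters
    if area > 0:
        print(feature.GetFID(), count / area)  # points per square meter
```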
An introduction to the PDAL pointcloud library, how to accomplish basic data processing, read/write files and how to scale to do batch processing. Also covering the use of PDAL docker images for quick installation. Also covering various PDAL plugins, optional drivers and connections to other projects that use PDAL.
10.5446/20363 (DOI)
Ladies and gentlemen, welcome our last speaker in this session with another drone topic, Aaron. Can we start? Yeah. Okay. Hello, everybody. Thank you for joining my presentation. I hope you're not too hungry yet, especially the people in the first row. So my presentation will be about two tools I created in the last two years. Oh, this one's still off. The first one was at the University of Hasselt in Belgium. It was a tool directed at the consumer public — so consumer drone flyers, just people who buy something like a Parrot in the store and want to start flying it. The second tool I created at Geo Solutions, a Belgian company and my current employer. That was a tool directed at business users. And this track of events led me to be right here at this moment at the FOSS4G conference in Bonn. So my presentation will be structured chronologically. First, I will tell you why I'm here and provide some context for both of the tools. Then I will very shortly show you the first tool, how it all started — the tool at the University of Hasselt. And next, my main topic will be the tool I created at my current employer, Geo Solutions. So during my studies at the University of Hasselt, I met with Professor Johannes Schöning, who was concerned with research towards space usage rules — rules such as no smoking, no fishing, no swimming. And he found that although you see these rules quite frequently, they are quite rare on recent mapping tools like Google Maps and OpenStreetMap. You won't find them very much. There is a way to include them in OpenStreetMap, but it isn't used much at the moment. At the same moment, drones became trending, and in the news it was all about a drone flying too close to an airport and it becoming dangerous. So the outdated rules had to be complemented by new drone rules, and safety measures had to be put in place to control these flying objects. And I differentiated two different kinds of rules. The first are the general rules — the rules for a whole country or a whole city with no spatial component on them, like a drone license that is required. And the second are the space-usage-rule kinds of rules, like this one — rules that are dependent on the features in the area where you're flying. But they are a little bit more special than normal space usage rules, just because of this spatial component. You can take a building with a no-smoking sign and you know you can't smoke in the whole building. But a rule like a 500-meter radius around industrial buildings — you can't find that on things like OpenStreetMap. So we have to find special ways to visualize that on a map. So my idea at the time was to combine these two — to combine the research on space usage rules to create a complete tool for drone space usage rules. And to create the tool, I needed three main pillars. The first one was a way to collect the rules, because in all the countries there are very different rules regarding drone flights, and I had to find a way to integrate all these rules into my tool. The second was the data modeling — we had to find an interesting way to model the data in the database so that we can reason with those rules. And the last thing is we have to visualize all the rules, not just the spatial rules but also the general rules, without overloading the user with information, so that he just knows: if I want to fly there, I need these permissions and I need to fly like this. So, my first tool at the University of Hasselt, which was directed at consumer flights.
I will introduce it via these three pillars I just showed. But first, all the rules you will see on the screenshots are for demo purposes, because it's easier to use simpler rules. So don't start flying based on the things you see on the screenshots. So in the first tool, we created crowdsourcing methods to collect the rules. Users could propose rules by constructing a sentence from these parts. So for example, if a user combines "do not fly a drone within 500 meters around an airport", he can send that rule to the server. And then the server would automatically generate a layer for that rule. So you can see here — oh, I lost my pointer — here is a layer of 5 kilometers around runways in this case, and there the layer is visualized on the map. And next to these layers, we also have a draggable marker which the user could place at the location where he wants to start flying. And then, based on visibility and other factors, a fly zone was created, and the general rules and some extra information about the fly zone were shown. So after I created that tool and finished my master's thesis, I started working at Geo Solutions, a Belgian company which combines ICT and geo to provide solutions for its clients. They were also one of the companies which collaborated in the first official drone project in Belgium, of which I have more information right there. If you would like to get a paper, you can come and get one after the presentation. But at Geo Solutions, we had to rethink the tool, because we are now looking at business clients, people who do professional drone flights, and they start by receiving an assignment. So for example, you will have to film that area, or you will have to do a check-up on a windmill to see if there are some cracks. After that, the flight planning is done — a flight path is drawn and things like that. And then the flight can be done. Within the flight planning, the most important part for this tool, we identified a cycle. So first, they have to draw their flight path — the ideal flight path to check up on the windmill or to film an area. Then they will have to research local rules and regulations, because these can differ a lot depending on where you are flying. And afterwards, they will need to request the required permissions. But it can be that you don't get all the permissions, or you will have to fly higher or lower. And then you will have to start over and do it all over again. And this cycle was quite heavily used in our application, certainly in the first part, in the visualization and interaction. And I will now show you a little demo of how the tool works. So what you're going to see is the user creating a project, a test project. And then he will start drawing his flight path for the project. So now it's creating his project. And once the drawing is finished, you will see that some rules and regulations that he will have to comply with in that area will quickly appear. So now immediately some fly zone information pops up. But that's not so important for this presentation. But you can also see a checklist of rules and regulations that the user needs to comply with. So the first one is a general rule, because he's flying, in this case, in the Netherlands: you will need to acquire the required documents. The second one says that he's flying within 150 meters of buildings, which are marked here. And then you will have to request permission from the local authorities — you will get some email information with it — and you will have to inform the inhabitants about your drone flight.
And the last one is that he's flying within restricted airspace, which is actually kind of a 3D rule, because restricted airspace has a height. And we used Cesium to visualize those 3D rules. So as you can see here, the flight path is drawn at a certain height, and you can also see the CTR, in this case the control tower area, visualized in three dimensions with a height. Now we are going back to the tool, and we're coming back to the cycle. So suppose he doesn't get permission for the restricted airspace, because it's too dangerous to just fly there. We can redraw the flight path, and you will see how quickly the checklist changes. So now the restricted airspace is in the "rules that do not apply" section, because there is no restricted airspace on the flight path, and the checklist is getting a little bit smaller. To show you how quickly the tool responds, I also made a third movie with another project. You can see here are four rules. But the user doesn't want to ask permission for the 150 meters of highway and the train rails. So he's going to redraw his flight path. And while he's doing that, you will first see the train rails disappear — they're not marked anymore in the tool, as you can see once he's done that. So now these are already gone, but the permission is still needed, because these are still there. And now when he switches this one, these aren't marked anymore. The rules are in the "rules that do not apply" section, and he quickly knows: okay, I only had to redraw. So he's going to request these permissions. So in this tool, the rule collection is done manually, because we are providing a tool for businesses — we can't afford to make mistakes. So we are using professionals to read through the rules of a country and extract the rules one by one. An example of a rule, which is also for demo purposes — I can't say if it's a real rule: to fly closer than 150 meters to train rails in Germany, you need to have permission from the rail network administrator. So this rule the administrator can input in the admin screen. What does he have to provide, and what goes into the database? The first thing is the application area. We have a list of application areas — in this case, it's a whole country, but it could also be cities, or for Germany, Bundesländer. The application area is Germany, so a link to that area is put into the rule. The second thing, 150 meters of train rails: you search for a good, up-to-date web feature service for it. Then we extend the database with the web feature service link — in this case the URL of the German rail network administrator — and the radius to the features of that web feature service. We also put that into the rule. And last, you need the instructions the user will have to follow once he starts flying there. So then we have our complete rule. We've now seen how you can interact with the tool and how the rules are put into the database. Now, how does the tool work internally? We've used only open source products. So as you can see: OpenLayers; we're using a GeoServer to put up our own web feature services if needed; and also Cesium for the 3D. But in this case, the flight path is drawn in OpenLayers. And this flight path is sent to the server. The server then should first request the application area. This flight path is drawn in Germany and in Bonn, so the application areas are Germany and Bonn. The server then needs to request the rules for the application areas.
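Before going through that lookup, here is a rough sketch of what one stored rule and the check it drives could look like. Every field name, the GeoServer URL, the layer name and the geometry column are hypothetical placeholders, not the tool's actual schema; the ST_Intersects and DWITHIN steps it corresponds to are described next.

```python
import requests

# A rule, roughly as described above: application area, WFS link, radius, instructions.
rule = {
    "application_area": "Germany",
    "wfs_url": "https://example.org/geoserver/wfs",
    "feature_type": "rail:tracks",
    "radius_m": 150,
    "instruction": "Request permission from the rail network administrator.",
}

# The drawn flight path, simplified to a short line string near Bonn.
flight_path = "LINESTRING(7.10 50.73, 7.12 50.74)"

# GetFeature request with an ECQL DWITHIN filter (GeoServer syntax).
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": rule["feature_type"],
    "outputFormat": "application/json",
    "CQL_FILTER": f"DWITHIN(the_geom, {flight_path}, {rule['radius_m']}, meters)",
}
features = requests.get(rule["wfs_url"], params=params).json()["features"]

# No features -> the rule goes to the "rules that do not apply" list;
# otherwise it lands on the checklist together with its instruction text.
print(rule["instruction"] if features else "rule does not apply")
```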
With a PostGIS database, we can do an ST_Intersects intersection between the flight path and the application areas. For Germany, it retrieves: a flight permit is required, and a permit is required to fly within 150 meters of train rails. And it does the same thing for Bonn. In Bonn, suppose there is one rule which says that you need to acquire a permit to fly within 150 meters of buildings. This rule is also collected by the server, and all these rules are sent back to the client. The client then has a collection of all the rules, but it still needs to put these rules in the correct list. How does it do that? Well, the first one is a general rule which does not have a web feature service connected to it. So the rule applies in the whole area and you can just put this rule into the checklist for that flight path. The second one is that you need a permit to fly within 150 meters of train rails. The web feature service gets a GetFeature request with a CQL filter: DWITHIN 150 meters of the flight path. It retrieves all those features for train rails, but no features are found. So we can just put this rule in the "rules that do not apply" box. And the last one is for the buildings. Suppose Bonn doesn't have a web feature service. We can set up our own web feature service with OpenStreetMap data and a GeoServer. It's the same request and it finds 38 buildings. So the rule does apply for that flight path and the rule is sent to the checklist box. And then you get an overview of how it looks in the tool. So you can see all the buildings are marked. The general rule is put right there. Then in the "rules that do not apply" box, you can find the train rails. To conclude, I will talk about our future plans for the tool. At the moment, it is a separate tool to show the instruction lists that the user has to work with. But it is also a very powerful rule engine, and we want to make use of the rule engine. A user probably doesn't want to draw his flight path twice, because he will already have to do that in tools like Mission Planner or FreeFlight. Why not provide an API to them, so that they can display their zones on the map and provide instructions right next to the map? Then the user will only have to draw his flight path once and will get immediate feedback. Another thing you can do — why only use this for drones? When you request a building permit in Belgium, for example, you will get certain conditions you will have to comply with. For example, in this area you can only build this high, you will have to have this kind of roof, you can't use a flat roof. And it is quite easy to do that in the tool: just use other application areas, other rules, and it will work just the same. So that was my presentation. Thank you for listening. You can ask questions now and enjoy your lunch afterwards. Thank you for being on time. Any questions? So rules are constantly changing. How is Geo Solutions working to keep them updated, especially since this is a trending industry and things are happening very rapidly? At the moment, we are just focusing on Belgium and the Netherlands, and we are researching the rules there. But we will have to collect these rules and we will need to find experts in different countries just to keep up to date, because that is going to be one of the big challenges of the tool. Well, in the previous tool, we used crowdsourcing and it got updated automatically. But we found out at the university that not many people actually know the rules.
So there wasn't much input on that — we only got a few inputs. So we will have to find experts to do that. Christian: what about non-professional users? Have you thought about making a map online, so that, say, a 17-year-old who is flying a drone in a no-fly zone can check it? You mean for just consumer flights, like the first tool? Yes. No, because we are so busy with the second tool, because Geo Solutions has a lot of business clients — it is a consultancy company. So we are directed at them at the moment. But it could be an option in the future. I think that most problems come from the non-professionals, because they are not aware of where they are supposed to fly. But we are looking at our clients, and our clients are professional flyers, so we have to provide the tool for them. Sorry. Secondly, I know this guy here in the audience who has been trying to make a DSM, so you can add where not to fly, with high trees and so on. Have you thought about that? I'm sorry? If you add a digital surface model so you are not flying into trees or buildings — have you added that into your program? How do you mean, a model where the user can add his own rules? The 3D. The 3D. Having a 3D model. Yes, I have added that, but the 3D model is not complete actually, because it just shows the rules in 3D; the flight path is not in 3D yet, because we are looking to integrate it into other tools and they already have the 3D flying tools. More questions? So another question about the use of a surface model. There are some countries like the UK, and I believe the US, where one of the restrictions is that a clear line of sight must be maintained with the operator. So if you had a surface model and you defined the pilot's location, you would be able to work out what line of sight is possible from that location. Is that something you would consider putting in? I have been thinking a lot about that rule because it's quite interesting, and in the previous version of the tool I used visibility to approximate line of sight. But we could ideally use things like 3D models to really calculate the line of sight. It isn't included yet, because it would be a lot of work, but it would be interesting to include, and I've also already been thinking about it. More? If not, thank you for your attention and enjoy your lunch.
Drone service providers are currently spending a lot of time on researching which permissions they need to fly their drones over a certain area. Today, most governmental regulations forbid to operate drones nearby transportation infrastructures or urban environments. In our talk we present a web application build based on open source tools to visualize such geographically-bound activity restrictions and therefore ease the process for drone service providers. The resulting system makes it possible for drone service providers to draw a flight path and receive immediate feedback on which permissions they will need to fly their drones in a specific area. A user is also enabled to edit the flight path to omit certain features and view live changes on the map and the instruction list. The project is implemented using a PostGIS database to store the space usage rules (SURs) (in our case the drone regulation of a specific country). A potential flight path drawn in an OpenLayers map by the user is send to the back-end which returns the regulations enforced in that area. In the front–end WFS-requests are performed to check whether the SURs apply to the specified flight path (i.e. when one or more features triggering certain rules are close enough to the flight path). Geoserver is used to create these WFS’s, the geometries of the features are extracted from OpenStreetMap. All instructions for the flight path are visualized in an instruction list linked to the maps highlighting the features in OpenLayers and in Cesium.
10.5446/20362 (DOI)
Presentation two: two-way data binding on mobile applications with Yaga, presented by my dear colleague Arne — 20 minutes, five minutes for questions. OK, thank you. So, like Axel already said: two-way data binding on mobile applications with Yaga. My main goal was to develop a mobile application, but Yaga is a framework built on parts of Angular, so you don't have to make a mobile application with it — but I think it was my main goal to build a mobile application with it. So at first I want to briefly introduce WhereGroup, my employer, and myself, give you a little more information about the Yaga project and explain what two-way data binding is, then show you a few examples of how to use Yaga, and in the end the project roadmap. So, WhereGroup: we develop modern web mapping solutions — the well-known Mapbender and Metador — and nowadays we also have mobile maps in our portfolio. This is realized with Yaga. And then myself: I'm a web developer and architect, and I'm working at WhereGroup. So, about Yaga: I'm one of the maintainers, and two of my school friends started developing this library with me. Our main requirements for Yaga were: it should be based on other open source projects, it should be integrated into a well-known framework, and we wanted to release the project itself as an open source project. We wanted to make a hiking app for a museum. It should be a hybrid mobile app, so we wanted to use it cross-platform — on Android and iOS, but also on normal web pages. Of course, for a hiking app we wanted to use mapping components, but we didn't want to miss out on a modern architecture: a model-view-controller system, modularity, and good testability of our software. And because we wanted to make a mobile app, we wanted to fulfill the typical design paradigms of mobile apps — especially mobile first, responsive design and of course user experience design, the typical topics in this business. So then we thought about what technologies, or especially which libraries, we should use. For the hybrid mobile application we used Cordova; for the mapping components, especially on mobile devices, we chose Leaflet as the best option; the modern architecture we wanted to realize with Angular; and the typical design paradigms we wanted to fulfill with Ionic, because Ionic itself is an Angular project, so we can combine all these things together quite easily. But what is the role of Yaga here? It's more or less the glue between Angular's two-way data binding and Leaflet. But I don't know who knows something about two-way data binding. More or less half the audience. So I just want to show you, as a draft, what two-way data binding is about in vanilla JavaScript, and then I'll show you a short way how you can create it with Angular. At first, here we have some code — I can show you the example as well. We have a model, it's very simple on this slide, and we can show the data from the model on the website. Wait, I'll show you the example. So, here is the data from our model, and with event listeners we also have the possibility to change it. But the data in our model doesn't change — it's still the data from the beginning. So then I show you, schematically, how you could do something like two-way data binding. It's a short way to show it. So, when we make some changes here, we see the text also changes in the view, but also in the model. So we don't have two places to store the data; it's bound here with a simple getter and setter.
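A minimal sketch of that getter/setter idea in vanilla JavaScript — the element IDs and names are made up for the example, this is not the code from the slides:

```javascript
// One source of truth: reads and writes of model.text go through the input field,
// and the setter also pushes the new value into the view.
const input = document.querySelector('#text-input');
const view = document.querySelector('#text-view');

const model = {
  get text() { return input.value; },
  set text(value) {
    input.value = value;
    view.textContent = value;
  }
};

// Keep the view in sync while the user types.
input.addEventListener('input', () => { view.textContent = input.value; });

model.text = 'Hello FOSS4G';  // updates both the input field and the paragraph
```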
So, you can maybe imagine more or less what two-way data binding is about. And the Angular way is very easy. You can realize this even without writing any additional scripts. You just have to say that you are in an Angular app — this is here, this ng-app example — and then you put ng-model on the input tag and say that the data should be bound to "text", and then you have the paragraph with the double curly braces with "text" in it, and this is then the same as the example from before. So we can change the data now and everything is data-bound. You don't have to think about how you get the data into the model and back and so on. So, another big pro of Angular is directives. I made two main types of directives for Yaga. First, the main or root directive is the map directive, and you can create layers with layer directives — a tile directive, a WMS directive, and a GeoJSON directive for vector data. I've also created a SpatiaLite directive, but SpatiaLite is only available on native platforms, on Cordova platforms, not in your browser. So this is not part of the normal Yaga — it's a plug-in for Yaga. So, here's an example of the root directive, the map directive. The point here is that we can now use this data binding. So we just set the center with latitude and longitude, and we can also bind it to the input fields with ngModel. And this is what the data binding is about: we are now able to change the map and the values here in the input fields change, but we can also change the values. But I have to go a little bit closer here. So, you can also change the values here and the map changes. So this is the two-way data binding. Then we get to the layer directives. First, all layer directives have the attributes attribution, name, opacity and display. The attribution is, like we know from Leaflet, the text at the bottom right. The name is just for internal purposes, so it's a little bit easier to identify your layer. The opacity is, of course, the opacity of the layer, and display is a boolean value, so you can add all your layers but choose whether you want to display them on the map or not. So, here is an example of the tile or WMS layer directive. Here is the example itself from the Yaga tile layer. Here I use the quotes — the double quotes and then the single quotes. This is because normally Angular expects variables, but I just put in text here, so this is why I use this maybe strange-looking way of writing it. We have the attribute url — this is the layer URL, the normal tile layer URL — and we have min zoom and max zoom, and on the WMS layer we have the layers attribute, because we have to choose which layers we want to use. Again, it's all two-way data bound, so we can also change the attribution via data binding and so on, or even the URL, but I think in most cases that doesn't make so much sense — but it's possible. Then we have the vector data layer, the GeoJSON layer. There we have pretty much the same, but we don't use a URL, of course, but the data: we can bind a simple GeoJSON object there, directly in JavaScript — it isn't a string. Additionally, we can also use the style attribute, so the style of the layer is two-way data bound as well — it's quite easy to change the color of a line string or something like that — and we also have the min and max zoom possibilities.
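For reference, the plain AngularJS binding described at the beginning of this part looks roughly like this in markup; the Yaga-specific directive markup itself is on the speaker's slides and is not reproduced here:

```html
<div ng-app>
  <!-- whatever is typed here is written to the scope variable "text"... -->
  <input type="text" ng-model="text">
  <!-- ...and rendered here immediately, without any extra JavaScript -->
  <p>{{ text }}</p>
</div>
```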
One other pro of this concept is that you are able to write your mapping application in a descriptive way. In this example, you don't really have to write any JavaScript — you just declare that you want to have a map with two tile layers and a GeoJSON layer. And when we take a look at the example — here it is; you can check it in the presentation itself, which I have uploaded to GitHub — there's no additional JavaScript; we just use the Yaga framework itself. So then I made a little demo here where I combine the Yaga directives with a layer tree built with Angular and Bootstrap. So when we take a look at this example, it's pretty much the same as before, but here we also have the possibility to change the opacity. All of this is two-way data bound — we don't have to care much about how it works, we just have to say here is an input field for the opacity, and everything works quite easily. So, a few words about the roadmap. I took a look at Angular 2 and I like a lot of its features, so maybe I want to rewrite it directly for Angular 2. There's also an implementation of Ionic 2, but Angular 2 is still in beta status, not fully stable at the moment, so I don't know yet if I should work directly with Angular 2. Another thing is that I write my code in TypeScript, and Angular 2 uses decorators extensively. I like decorators too, so I was thinking about working more with decorators as well. But at the moment the Angular 2 API is not fixed. There haven't been a lot of changes to the API in the last year, but I can't promise that this API won't change anymore. Another thing is server components. I like Node.js a lot, so maybe I will also develop a few server components. And I'm taking a look at the Tangram layer with vector tiles — this is one of the layers I want to implement soon in this concept of Yaga. So, you can get my slides from my GitHub account. You have time to write it down on the last slide — I've written it down there too. Here are a few links; I think the best way is to get them from the slides themselves on GitHub. We have a forum where you are able to ask questions or contact me or things like that. Here are additional links about the technologies I used in this project. And last but not least, while you are here near Bonn: in the last weeks I made a demo application together with my friends, the other maintainers. So if you want, you can go and hike near Bonn in the Wahner Heide and try out Yaga with a hiking app. It's the area around the airport, so maybe if you have to wait for your airplane, you can take a little hike around the area there. So thank you for your attention. Do we have any questions? Any questions? Any questions about the Wahner Heide? Okay, just one short one. First, you said you were still thinking about whether you should switch to Angular 2. Definitely yes — I think it's definitely a good idea. And just my question: why did you decide to work with Angular? Maybe I missed the first few minutes of the presentation, but why not, for example, React or something else? I like Angular a lot because of its way of two-way data binding. And I worked a little bit with Angular in earlier projects, so I started developing this project with Angular as well. Another question? It was more a personal decision on this point. Have you done something similar in that area, in that topic, with two-way data binding with AngularJS? Nothing special, but I just worked a little bit with AngularJS. And then in the meantime, Angular 2 came up.
So now I'm also starting to get into it. But definitely, like, I still need support, also to combine it with OpenLayers and things like this. But yeah, I think it's also much better and much more advanced than AngularJS, and it's definitely the future. Anyone else? How many people understood what it's all about? Because I find it very hard in the JavaScript area with all the frameworks and the concepts around this. And I began developing with JavaScript years ago myself and it was a different time. So something very special. Okay. One question. Okay. But as I understand it, it's completely glued to Leaflet, so you can't change the map engine. I made an abstraction layer, but it wasn't as easy as I thought at the beginning. You are more or less able to change it at the moment. At first, I thought I would make this work as well, but it wasn't so easy. But I think if you take a look at the source code, maybe you are also able to write it as a provider for OpenLayers. I designed it so that you are able to change it. But I don't know at the moment if I will really develop a version for OpenLayers as well — at the beginning, I wanted to. Okay. But at least I got the answer that you write it as a provider, so it's possible to add another provider. Yes. It's possible on the Yaga side itself. I have multiple stages: I have an abstraction layer, and then I realized that with Leaflet, and after that I wrote an Angular driver. And you are able at the moment to exchange your Leaflet driver and take another driver for Angular. So it's prepared, but it isn't realized. Okay, thank you. A couple of questions. The first one is easy: does this depend on Ionic or only on Angular? It depends only on Angular; you don't have to use Ionic. Second, there already exists a Leaflet Angular directive. I don't know if you know it or have tested it, and why you decided not to use it. Yes, good question. When I started developing this framework, there wasn't any angular-leaflet-directive — I think that's the name of the project. But I don't like the way of developing with the leaflet directive so much, because you only have one directive and you can't write your code, for example like mine, in a descriptive way. I think it's much more granular to write it with additional directives, so you have a lot of directives that you can use to build your application, and not only the one with all the attributes on it. I think it's a little bit easier. I agree. I'm using the leaflet directive; I'm probably switching to yours. But the naming probably is a bit distracting — it's not easy to relate Yaga to a Leaflet directive. But good work. Thank you. One question from me: what does Yaga mean? Yet another geo application. Most web applications need other GUI elements like forms or attribute tables or these kinds of things. Are you planning to incorporate that as well? No, because this is the part of Ionic and so on — they implement all the UI elements. I don't want to write this again and reinvent the wheel. I just wanted to use other libraries for this. So you can use Yaga with Angular, and if you want to have UI elements or something like that, you can use Ionic. But you don't have to use Ionic; it's not depending on Ionic, like I already said. But if you do a feature info request and get your feature back, I would think that this would lend itself to the two-way binding that you were talking about, and you would bind it to a GUI element from your framework. Yes, that's right.
But I'm just the map part — I built just the map part. And if you want to change something, like I did in the slides earlier, then you are able to use other UI elements that are integrated into Angular. And you don't have to make your own, I think — you have a really big choice of other elements, there are a lot. You only have to deal with the responses and then put them somewhere into the interface. Yes. The main goal of this project is to two-way data bind the data of the map. What you do with the data, you can choose yourself, with every UI element you will find on the Angular side. One last question. No more? Okay. Thank you very much. Have a nice day.
This talk is about the Angular components of the early open source project Yaga. Angular serves an elegant and modern way to structure HTML-single-page-applications with its MV* pattern. Directives are one of the most powerful tools in Angular. Yaga provides directives for webmapping proposes, like the map itself, markers and different kind of layers. All directives are ready to use with two way data-binding. The main goal of Yaga is to harmonize it with Ionic. Ionic combine the power of Angular with the power of Cordova, a framework to create hybrid mobile Apps from HTML sources for all common mobile smart-devices. Additionally Ionic adds a UI that is close to the native look and feel of the mobile devices. With this stack you are able to create a GIS application for Android, iOS and Windows at once. In my talk I want to create a sample application and present the pros of Angulars two way data-binding and Ionics mobile UX design for mobile GIS applications with Yaga. Arne Schubert (Wheregroup GmbH & Co. KG)
10.5446/20361 (DOI)
Okay, so we'll move over to our third topic — not exactly in education, but different. But I think it's a topic that everyone knows about, few people know how to deal with, and I think no one mentions it when they present their results to the people who pay them. So Sven Christ is going to speak about uncertainty in data. Okay, good afternoon everyone. So my name is Sven. I'm from Salomars University, doing my masters. Firstly, I'd like to thank my supervisor, Mrs. Manj, as well as my university, and lastly the South African National Research Foundation. So: Cogito ergo sum — I think, therefore I am. It's a statement most of us know. However, do we know the full statement? Dubito, ergo cogito, ergo sum — I doubt, therefore I think, therefore I am. Basically what this is saying is that within doubt we actually find what we know. We can only know how much we know, and how well we know it, when we start doubting what we know. So how does this apply to spatial data? Well, basically we all use spatial data in some form or other. But how much do we know about this data? Do we doubt our data enough to know what we can do with it? And if we doubt our data enough, we should all know that inaccuracy is always a part of spatial data. We just have to know about it and learn how to deal with it. So what is uncertainty? I'm going to sometimes use uncertainty and data quality slightly interchangeably. Mek Itshin says that when inaccuracy is known it is error; when it is not known it becomes uncertainty. Longley et al., also in 2005, defined it as the difference between a dataset and the phenomena that it represents. And Shihwe and Kinklid stated that uncertainty is really a fuzzy concept — we're still a bit uncertain about what we mean when we say uncertainty. So, carrying on: how is data quality normally measured? Normally we have our statistics such as Kappa, the confusion matrix, RMSE or mean absolute error. Basically these are global statistics for a dataset: when you produce something, you produce your statistical quality assessment, you put it into an accuracy report and no one ever reads it. Let's be honest. So a little bit has been done about this. How much of these statistics do users understand — especially those users we've heard about in the previous talk? Dakhra found in his study that experience plays a big role in how uncertainty is perceived: more experienced users might be more cognizant and, shall I say, more nervous about using bad data. And also, providers often don't really provide that good an analysis of their data, because sometimes they just don't want you to know how bad a dataset is. And finally Tagmeyer, in a similar study, also found that 25% of his study respondents paid no attention to statistics at all. So my study was done with South African users, but it was a reasonably small study group, only 63 participants. What I found was that nearly everyone is cognizant of uncertainty in data and that datasets aren't perfect. However, only 60% of these users — and sometimes producers of datasets as well — look at quality reports. Then only 36% of those that produce value-added data — so they take a dataset, add extra value to it and then pass it on — look at the quality of the data. So if they don't know how good their input data is, how can they really say much about the output data?
So when I put the broad statement out in a questionnaire — how do you feel about 80% certainty in a dataset, just as a general question — about 63% said it depends on the data, on the purpose of the data; we'll get to that again. About 29% said they feel comfortable using this data, and another 18% said they completely reject the data, which also means they don't really understand data well enough. To take it further, when I asked how uncertainty is managed in their normal workflow, about 59% said they try to improve the dataset and communicate uncertainty. However, only 47% of these people actually looked at the accuracy reports. So how can you comment on accuracy if you don't know how good the dataset you are using is, and how can you improve it if you don't know what you're improving? Another 18% said they look at whether it's fit for purpose; however, 36% of this 18% also don't look at an accuracy assessment. So how do you know if it's fit for your purpose if you don't know how good the dataset is? So, do South African users and producers understand statistics? I don't really know, but I don't think so. Many do not ask for an accuracy assessment. They assume the data is good enough — it must be good enough, I got it from someone. And some even assume 90%-plus accuracy on a dataset just because they got it from someone else. An example was someone who said they assume cadastral data is 100% accurate — but it's not really, because no matter how good your dataset is, you're going to miss something. Some projection is going to move something. No dataset is 100% accurate, because we just cannot really model the world in a perfect way. So how should users use quality reports? Well, quality reports are meant to inform and tell how good the dataset really is. So you can use them to check if it's fit for purpose. The ideal would be that each dataset is evaluated and, as I said, checked for fitness for purpose. Quality reports should create a space where all involved are cognizant of what datasets can and cannot be used for, and where improvements are required. Only if you know how good your dataset is — or how bad it is — can you really improve it. So then my next question was: can visualization aid in communicating data quality? Well, visualization has been found to trump text in appeal and communication power by previous researchers such as Bostrom, Ansel and Ferris. And more than 70% of those respondents that said they don't look at their quality report said visualization would actually help and they might actually look at this. Therefore visualization can aid in bringing the understanding of data quality to a whole new audience. So, some existing visualization tools: one is R-VIS, by Howard and MacEachren. It was basically made for a kriging process that was testing soil nitrogen content. So as you can see, in the first image the blue represents areas of high certainty and the red represents areas that were measured as nitrogen-containing but with less certainty. The red over here is just the whole area measured, and this red over here is just the area of high certainty. So that's one method, and most users said that it is one of the best methods — it's easily understood. Other solutions exist, such as Aguila — I must add that the previous solution is not available anymore, it's very outdated. This one, however, is available: it's Aguila, part of PCRaster.
A couple of problems I've come across with it: if you're using a Windows machine it's really hard to install, with a lot of dependencies. Also, the learning curve for using this is quite steep. However, it provides very good statistics and it's a very powerful tool. Another one is UncertWeb, which was supposed to be a web program. It was funded by the European Commission from February 2010 to January 2013. A lot of work was done on this; however, the project seems to have fallen flat with the funding. This again shows that visualization and data quality are very important aspects, if the European Commission was willing to fund it — however, it's another failed project. Two other tools to mention: one by Alberti in 2013, which however isn't openly available, as well as one by Fowler — but as I said, again, that's not openly available to everyone. So the problems with these solutions are: they're outdated, most of them aren't openly available, they're complex to install and not really user friendly. So I've developed a QGIS plugin to visualize uncertainty. It's only for continuous raster data at this point. What is the purpose of it? To bring a visual aspect to data quality and to easily communicate data quality. And users said they would prefer a QGIS plugin or a plugin for ArcGIS — according to Alberti's study — as this would easily integrate with their current workflow. So why QGIS? Well, it's easy to use, no expert knowledge is needed to install it and it's freely available to everyone. So this is a bit of a framework for my tool, should I say. You have your raster that you want to evaluate, and you have a shapefile with points; the shapefile with the points can either contain reference values or just points where you want to test. If you don't have reference values in your shapefile, you'd have another, reference raster. The statistics get calculated — I'll talk about the statistics again — the shapefile gets created and loaded into QGIS, and then you can choose either a colour-vision-impaired style or a regular-vision style. This is basically what the user interface looks like: up top you have your dataset to be tested. On this side you can see that if you tick that the data is discrete and not continuous, you'll get this little warning at the bottom saying that this tool won't do that. What it will do is give you a binary response saying yes, it's correct at this point, or no, it's not — so you can't use the statistics, they'll just be rubbish basically. So, carrying on, you have your shapefile with points; you can select "use the shapefile for reference points" and you get a little dropdown to select the field with your reference data, or, if you don't tick that, you'll get your reference raster. Then you get the browse button for where you want to save the file, and then you get these two boxes — basically the same thing. The first one is just an overall view, which is all the statistics put together and visualized with Jenks breaks. The second one is a Z-score, which is the statistical deviation from the mean, as well as the modified Z-score, which takes the median and the median absolute deviation to account for outliers in the data, and then the ideal Z-score, which is sort of the dataset's standard deviation, but modelled against an ideal that has a deviation of zero. So the Z-score and modified Z-score are also sort of testing a dataset against its own quality. So this is basically what you get out when you press OK in QGIS. I'll just take you through a couple of options.
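To pin down the statistics just mentioned before moving on to the demo: here is a rough sketch of how they can be computed from the point-wise differences between tested and reference values. The 0.6745 factor is the conventional scaling for a modified Z-score; whether the plugin uses exactly this formulation is an assumption.

```python
import numpy as np

# diff = tested value minus reference value at each check point
diff = np.array([1.8, -0.5, 2.4, -3.1, 0.2, 5.9])

mae = np.mean(np.abs(diff))                  # mean absolute error
rmse = np.sqrt(np.mean(diff ** 2))           # root mean square error
p90 = np.percentile(np.abs(diff), 90)        # 90% of absolute differences fall below this

z = (diff - diff.mean()) / diff.std()        # Z-score: deviation from the mean
mad = np.median(np.abs(diff - np.median(diff)))
modified_z = 0.6745 * (diff - np.median(diff)) / mad  # robust to outliers

print(mae, rmse, p90)
```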
This is — you'll get your statistics in your attribute table, so you can open it and investigate the actual differences, and all the statistics that get calculated while calculating the visualization also get put into the attribute table, so you can investigate them. And this is just a test dataset, which I'm going to show you, that I've run through the tool. So the mean absolute error is 2.46 meters — this is a five meter resolution digital elevation model, by the way — and the standard deviation is 2.29 meters. The 90th percentile is 5.25 meters, which means that for 90 percent of the data the difference will fall below that. So that is a statistic not often mentioned, which is interesting, because you will get a statistic such as your RMSE of 3.36 or your MAE of 2.46 and you'll think that is good, but as you'll see, 5.25 is quite a jump from that. So finally the overall visualization: when I put that through, the overall visualization showed the areas in the top right corner — those are outliers; you also have some over here and over there — but that is an area where most of the differences will be above 3.36 and some of them even higher. When using the other options available, the difference also showed a similar area. Interestingly, the Z-score, when measured against itself only, without accounting for outliers, showed the fewest areas with outliers; however, it also highlighted the same little cluster. All of these actually highlighted the same cluster as being problematic. So basically what this is doing is giving you the statistics — you should have your statistics — but it is almost trying to deal with a problem with statistics, namely that spatial data is spatial in nature, and you can't really have one statistic for the overall dataset representing the data at every point. So what does the user get from UV view? A simple-to-install QGIS plugin — no expert knowledge is needed to install or run it. It's freely available to all once I upload it, and it's a visual tool to understand data quality — the statistics as well as the visualization. So I've asked some users about this tool and how good it is, and basically the response was overwhelmingly yes, it does provide a better understanding of the statistics. On the question of whether a visualization will degrade the perceived quality of the data: mostly, for GIS professionals, no, it won't; for the layman, yes, it might. But this basically goes together with what Dakhra found in his study — producers said it might degrade the perceived value of the product. So finally, can visualization help in the management of uncertainty and the communication thereof? That was also, marginally, a yes. Some shortcomings of the tool: more statistical options are required, better documentation on the methods and the statistics is needed, and there are some scale issues, as well as the fact that it can't do an internal validation of a dataset — it needs a secondary dataset with reference points — and more contrast between some of the categories is needed. This is one of the shortcomings that I've addressed: it's basically just an "about" tab which gives a whole lot of information about how the tool works and the statistics. And then finally, to recap: uncertainty is present in all datasets whether we like it or not; statistics are used to communicate this uncertainty, however they're not always very well understood. So basically we often don't doubt our data enough to know how good our data is.
And then finally, this tool is not complete — it's a step in that direction — and I'd say for future tools perhaps a web-based or cloud-based solution would be better, since GIS is heading a lot in the cloud direction. Thank you. Thank you, Sven, for this interesting presentation. Do we have any questions, reactions, remarks? Probably I just didn't understand very well what you meant with the user feedback, when you said that using the plug-in degraded the perceived value of the data — could you explain that better? What I mean is: when you produce a dataset and you have an overlay of the uncertainty, and in these areas it might be better and in these areas it might be worse, the end user might look at this and think, well, this dataset doesn't look as good as I thought it was, so I don't like your data, go do it over — basically, they don't perceive your data as well as if you had just given a statistic, and the statistic looks good or sounds good. Any other questions, reactions? Hello — when applying this shapefile with points, is there a lower limit to how many points you could upload? I guess the uncertainty relies on how many points you have. Actually, there's a lower limit of three, which is really bad, and which is one of the limitations I mentioned as the scale issue, because the number of points and the distribution of the points also affect the scale at which you can actually say anything about the data. So I was thinking of maybe adding something that checks the furthest distance between points and just gives another pop-up saying this is the best we can say about this data. But I should probably apply a better limit — maybe per kilometer you need x amount of points, depending on what quality or what resolution your data is at. I actually had a question linked to that: how do you go from the points to your raster with uncertainty everywhere — what do you use as a technique? To go from the points to polygons, it's Voronoi polygons, which is basically taking every point, and the area that is closest to that point will be linked to that point — which also relates to some of the scale problems. Any other questions, remarks? Thank you very much, Sven.
A talk about data quality, how it is understood and if visualization can improve the understanding of data quality. A lot of focus has been put on data quality and methods of accuracy assessment. Most of these methods are however statistical. The focus here is on how users and producers view uncertainty and a view into what is the current reality especially relating to the statistics that are presented. A research based section deals with uncertainty perceptions specifically in South Africa but also related to international literature. A tool (QGIS plugin) for uncertainty visualization in continuous raster datasets is also shown. Finally there is a brief demonstration of how visualization can aid in showing the results of uncertainty in data that is put through a model. Thus giving a visual example of the power of visualization.
10.5446/20360 (DOI)
Hello, everyone. So my topic is about point clouds and how to show them in the browser, with a project called Potree. The presentation was prepared by me and Markus Schütz, who is the original author of this library and who is unfortunately not able to come to FOSS4G in Bonn. And a few more words about me: I'm a geographer, a software developer and maintainer of a project called pgRouting. I founded a company, GeoRepublic, and I'm living in Germany in summer and the rest of the year in Japan. And for a long time I have enjoyed open source, FOSS4G and OSM. And yeah, it happened that I had to work with point cloud data for a project. So what do we do with this point cloud data? Maybe our point cloud data is not so common — it's point cloud data recorded to maintain road infrastructure. So it typically looks like this: very high quality and a high level of detail along the road, and just no data where there is no road. And you have very complex junctions. So sometimes it's very dense and sometimes there's nothing. What you need to collect this data is such a car — it's quite expensive. Yeah, it's not something you buy so easily. But there are companies that use point cloud data and laser scanners and all these things, and point cloud data is getting more and more common. Also, new cars have tons of sensors and also scan the environment. So I think it's just a matter of time until you come across point cloud data, if you haven't yet. And if you can't wait, there's also a way you can create your own point cloud data. You can do that at work — maybe you have access to such a drone and can play around with it while working. So you can go to the next park and fly around a bit and take photos from the top, and you load them to your computer — quite a lot of files — and then there's now open source software that can turn this into point cloud data. And I did this one time, and I forgot to adjust the lens of the camera and it was fisheye, so everything became a bit round. But I found it very cool and it was fun to play with. And so the park in the end looked like this, just flying around for 30 minutes until the battery was empty. And then I kept my computer running, and it was running quite some time and became very hot — I couldn't do any other work during that time — and it turned out that this small little park was already 45 million points and a 1.2 gigabyte LAS file. So yeah, welcome to big data. So the question now is: if I want to show this, how can we show these — not millions, but billions — of points in a browser? And that was the requirement for the project we did. So I was looking around at what exists — it was two or three years ago — and I found a project called Potree, and it looked very interesting. It was named a WebGL point cloud viewer for large datasets, and it was at that time developed — and is still developed — by Markus Schütz from Vienna, and the number of contributors has been growing since then. And maybe we were the first — GeoRepublic was the first company supporting this financially, and then rapidlasso contributed funding for new features, and more and more came, so the library was actually growing and became an open source project. I don't know exactly when it started and when the community started, but the GitHub account started in February 2014, and there was then a lot of activity, depending on funding, and then there was a little break in the last months. So, taking a breath.
I think we have more in mind and development will increase again. So how does Potree work? There are a few things to deal with when you have such a lot of data. Usually you only see a small part of the data; you don't have to show these 45 million or more points, and you don't have to load everything. Only what you can see, from your point of view, needs to be loaded. So it uses a multi-resolution octree, and this allows you to not display everything that is unnecessary and only load what you need; you try to minimize the number of points. It has low-level nodes that contain low-resolution models of large areas: when you zoom out, you only see these, and when you zoom in, the resolution increases while the covered area becomes smaller. This is pretty much what we also have with map tiling, so it's a very common way to deal with such an amount of data. Each octree node is stored in a separate file, so the server is only needed as a file store. In our case, for example, we store the files on S3, so you have kind of unlimited storage, and no server-side application is required to load this. Potree requires a certain data format, and for that we need a converter that was made for this, named PotreeConverter. Initially it was not the case, but then we decided to use Three.js as the base for the rendering engine, so you have all the capabilities that you would also get when you use Three.js directly: if you want to extend this functionality, show a video inside or so, you can do that. Client-side LAS and LAZ (compressed LAS) support was also added, coming from a project called Plasio, which is also very cool. And we needed some tools, especially for measurement in the point cloud, which were added. Of course it's open source, and it has a very business-friendly license, BSD. About the data format that is currently supported: you can take LAS, LAZ and PLY files as input, and as output, after you run the converter, you get a cloud.js file, which is a metadata file that contains information about bounding box, spacing, storage format, and so on. You also get many octree node files, which contain information like position, color, intensity, or classification, and there is an additional file to store the octree hierarchy. The features, listed once here; I will show some interesting ones later in detail. First, you can render billions of points in the browser: that was the main goal. You can change the point size, so if you have a sparser point cloud, you can fill the holes. There is something called Eye-Dome Lighting that makes your point cloud look shiny and illuminates it. You can render RGB, elevation, intensity, or any other attribute that you have. There are various rendering modes: if you have powerful hardware, you can do more things like interpolation or splats, and if your computer is not so strong, you can dial back these computation-intensive rendering modes a bit. You can measure distances and areas, you can get height profiles, and you can clip volumes. A lot of this functionality was added on request, funded by projects that needed it. So how do you show a point cloud more nicely than just the raw points? That was added later, and what makes it very nice is this interpolation: it increases the readability of details, so fonts, for example, become easier to read.
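Before going further into the rendering-quality features, here is a minimal sketch of what loading such a converted data set typically looks like. It loosely follows the published Potree examples; exact function names and options can differ between Potree versions, and the container id and file path are placeholders, so treat it as an illustration rather than a guaranteed recipe.

```typescript
// Minimal sketch of a Potree viewer page (assumes potree.js and its
// dependencies are already included via <script> tags, hence the declare).
declare const Potree: any;

// The viewer renders into an existing container element on the page.
const viewer = new Potree.Viewer(document.getElementById("potree_render_area"));

// Cap the number of points rendered per frame (the "point budget" mentioned above),
// and turn on Eye-Dome Lighting for better depth perception.
viewer.setPointBudget(1_000_000);
viewer.setEDLEnabled(true);

// Load the metadata file produced by PotreeConverter; the octree node files
// referenced by it are then fetched lazily as you navigate.
Potree.loadPointCloud("pointclouds/park/cloud.js", "park", (e: any) => {
  viewer.scene.addPointCloud(e.pointcloud);
  viewer.fitToScreen(); // zoom to the extent of the loaded point cloud
});
```

The key point is that only cloud.js is requested up front; the individual octree node files are fetched lazily from the plain file store (a local folder here, but it could just as well be S3) as you navigate.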
Back to rendering quality: you avoid overlapping of points by using a nearest-neighbor interpolation that resolves overlapping points. The other thing that makes it look nicer is called Eye-Dome Lighting. It is a kind of illumination, and at the same time it adds some outlines, so it feels more 3D, with more depth, and it looks quite nice then. This is especially useful if you render elevation or the classification of your data. Something else that was added: sometimes it's very difficult to navigate this area in 3D, so you have some points of interest, and you can add annotations, small little POIs or markers with numbers. When you click on them, the viewer zooms to that point, and you can predefine how you look at this point or object. Something we needed especially was getting height profiles. Initially this wasn't calculated directly in the browser, but the data is already there, so you just draw a line, which can also consist of multiple segments, and it will immediately draw you a height profile like you see here on the right side. Even if your line is a zigzag, it will show it flattened in 2D. And because in the demo later I can't find such a nice path, here's a nice example; I'm not sure how well it works with the Internet and the mouse. Then there's the measurement, which was important for us. In the demo you see here there are some example panels, but of course this is a library, so you can use the functions and methods to build this yourself; it just gives you an idea of what you could do. You can display coordinates at a certain point, you can show distances like in the middle picture, you can remove a measurement again, and you can also show (that's the button at the end) the 2D profile for a line. I will try to show this demo later, and in case it doesn't work, I will show you the pictures quickly. On the left side there is a panel that was initially for debugging, but it's also a good place to start. You can make a lot of adjustments to your point cloud: you can change the point size, and you can set the maximum number of points you want to render, because this uses a lot of computation power; if your computer is not very powerful, it's good to reduce the maximum number of points to display. You can change the point sizing, make it automatic or a fixed size, change the shape of the points, and all the tools are listed there. And if you zoom in with the coloring, it actually looks quite beautiful. There are actually many, many examples; Markus was able to get data from various sources and show it in his demos, and all the demos and their source code are available on GitHub. There are some limitations. There are probably more limitations than three, but I wanted to mention these three. One is the browser security policies: even though it's just HTML and JavaScript, you cannot simply run it from a file, you have to have a web server to load this. And of course you need WebGL-capable browsers and devices. Three years ago this was a bigger issue, but in the meanwhile it is not such a big problem anymore; even Internet Explorer now eventually works, I think, but I don't know exactly. And you can only use it for viewing and analyzing.
You cannot manipulate the data, of course, because it's stored in the file system. So who is using Potree? There are quite a lot of companies I've heard are using it; probably there are more and we just don't know about it. I think it's actually a well-used project, and a lot of functionality was funded, so there seems to be quite some demand. There are related projects, very interesting ones, and I will just quickly mention some. Later there will be, for example, a talk about the iTowns project, which is a 3D WebGL geospatial visualization project. In the afternoon there will be another session where you can hear more about Entwine, which does indexing of point clouds and is something that might replace PotreeConverter eventually; it does this in a very sophisticated way. Then there's Greyhound, which provides a data streaming and serving environment, and PDAL, about which you will probably also hear more in the afternoon. And like mentioned before, 3D Tiles is something we are also looking at; I'm not sure exactly what the state of this is. I think we are missing a standard. And there's the pgPointcloud project, which was maybe one of the first ones, but I haven't tried it much. I think point clouds are a topic that many people care about these days. Like I said before, I think we need a standard. Standards are what make open source stronger, because we can easily exchange libraries and easily communicate with other projects. So I think this is important to think about and to define, and maybe we can't wait until OGC finishes their official standard. And now let's quickly try the demo. This is a data set with 150 million points, the one I showed before. On the left side you can turn the configuration menu on and off, and for example, when I click on Point 2, it moves to this point and loads. Maybe the internet speed is not so high, or it's the graphics card; that's the screenshot from before. You can decrease the number of points or increase them if the computer is powerful; in this case we set the maximum to 5 million points, and the default was 1.5 million points. You can change opacity, and, this is from Three.js, you can add the sky as a background. And for measurement, for example... okay, that's the problem of demos. So this is the height profile: you draw a line, let's try to hit the column, not so good, and now we can show the profile here at the bottom, which is not as good as the one in the example. You can show at which point you are. This is not an application; this is the standard interface that shows what Potree can do. So after a few months of break in development, there is work going on again: the release candidate of version 1.4 was released, and the final release should be available by the end of the year. One of the new features is faster conversion of the data, and the file format has been improved; there were too many files, and you have to find the right balance between the number of files and how you organize them in folders. In this case, it uses 50% less space now, has fewer files, and should improve performance.
And it should also contain support for Greyhound and Entwine, which was implemented by Howard Butler and a few others; I don't know exactly who was involved. Later, other ideas are things like projecting maps onto point clouds in real time, or loading and displaying point cloud formats without any conversion. This always needs funding; Potree got this far because of the support of many companies, and I think a lot more is possible. So if you are one of those who need something, then contact me or contact Markus. You can find the project on potree.org, and we are very happy to hear feedback, things that work and things that don't work. Thank you very much. And if you have some questions... Thank you very much, Daniel. So, questions? I'm sure you have more questions than we have time for. Thank you. Is it possible to export this 2D profile, and if so, into which format? I think you can do this. It's rendered with WebGL and Three.js, but I think there's a way to export it; I just don't know exactly how it works. I think it's possible. Could you maybe briefly explain the tool chain? I'm from the German railways, we have actual trains which do lidar scanning, and we have point clouds. I might want to make this kind of prototype or demo to pitch to our management. How would I proceed, which tool chain would I use? You have your LAS files, I assume. Then you use PotreeConverter to convert into this tiled format, and you upload it, for example, to S3. And if new data arrives, you don't have to do everything from scratch. But I think there are going to be more sophisticated versions; the focus of Potree is not so much the converter. With Entwine, and Greyhound as a streaming service, there is maybe something that scales for larger areas. In our case, we always have certain junctions and certain areas, they are connected, we get the data once, and then we run it once through this converter. You have these wonderful measurements; is it possible to capture the data and export it to other programs? There's nothing built in like this now, but it should be possible to do that. You can also change the width of this profile line. In which format would you want it: the points themselves, or just an image? Because you don't have all the points of your original data in the browser. No, I was thinking about the measuring: you were showing how you could measure and do some sort of digitizing. Is it possible to capture that information back into another program? Not from the profile. But Potree itself only loads what you need, so for the data you have in your browser it should be possible to add this functionality to export it, though I think an image maybe makes more sense, like in the question before. Another question? Over there? A nice presentation, just a comment: Potree also has the possibility to create a little download portal for the original data. If, in addition to the LAS file input, you give it a projection, it will create an OpenStreetMap-based overview map from which you can download the LAS tiles, so you get a very simple download portal. Okay, we don't use this, I didn't know that exists. So there seems to be something done in this direction; thank you for the hint. One last question maybe.
Thank you, Daniel, for this nice presentation.
Potree is an open source project that implements point cloud rendering capability in a browser. It is a WebGL based point cloud viewer for large datasets. Thanks to WebGL, it runs in all major browsers without plugins. Over the past years Potree has evolved from a small library to an active open source project with an active community, several companies funding development and an increasing user base. This presentation will give an overview over the current state of point cloud rendering with Potree, about the difficulties and challenges. Pointcloud data is expected to play an increasing role in the next years with falling prices for previously very expensive hardware such as laser scanners, the development of autonomous vehicles and the popularity of drones. Powerful hardware and WebGL will open up a wide range of innovative browser-based web services in the near future.
10.5446/20356 (DOI)
Good morning, everybody. My name is Jáchym Čepický, and originally this presentation was announced by Luís de Sousa, who is here with me as well and is a member of the project steering committee and development team. We also have here Jorge de Jesus, who is also a member of the development team, and I'm missing Jonas here in this room; so we have four people at this conference. I would like to present you the news in the PyWPS project. Some of you might have already heard about it. What is PyWPS? It's an implementation of the OGC Web Processing Service standard on the server side, written in the Python programming language. The project is quite mature already, or we would like to call it mature, and it supports all the tools which are available in Python for geospatial processing operations. The home page we managed to register recently is pywps.org. What is it not? You probably have some expectations, and those are expectations we are not able to fulfill. It's not complicated. There is no client in PyWPS; if you're looking for a client, you can look at QGIS, for example. It has no graphical user interface; we were inspired by MapServer, by the way. And there are no processes, only some for testing purposes. So if the question is, can I do a buffer with PyWPS? Yes, you can. But if the question sounds like, is there a buffer included? No, there isn't. A brief introduction to the OGC Web Processing Service, which you might have heard about already and will probably hear about at least three times during this session: it's a standard for remote geospatial processing, and it's somehow coupled with the other OWS open web services produced by OGC, like WFS or WCS. There are three basic requests. The first one you all know is GetCapabilities, which basically gives you metadata back. Then there is the so-called DescribeProcess request, which gives you detailed metadata for all of the inputs and outputs of a particular process, so that the client can prepare the last request, which is called Execute. That means basically that the server will execute the process on the data. According to the standard, we recognize three basic input and output classes: so-called literal data, which is usually a text string or a number; complex data, which is usually some geospatial data, vector or raster; and bounding box data. By the way, I'm referring to version 1.0.0 of the standard; there is currently a new version of the standard, 2.0.0, in place, and there can be small changes. But generally, again, bounding box data: it is not widely used, but it is still there, and it describes the geospatial extent of some operation. This is how it works in practice. So we have this server (there's no mouse, OK), and on the server there are, in our case, N processes deployed. The server communicates with the outside world, with the internet, using the Web Processing Service: GetCapabilities, DescribeProcess and Execute. And when the data are being processed, they are basically transformed into information. PyWPS, in a practical way, is a communication bridge. It fetches the data referenced in the Execute request, makes sure that the data are OK, that they are not too big, that the format corresponds, and so on. It creates a kind of container for the process instance.
We have implemented some process management, for communication, reporting how the process is going, and logging. We need some data storage in order to store the final output, which can be a raster file or a vector file and can be pretty big; we have to store it somewhere, so we had to implement a storage mechanism. And there is client notification and status reporting, of course, so the client knows how the process is going. The project started 10 years ago, and it was first presented at FOSS4G 2006, the first one in Lausanne. Originally it was funded by a scholarship of the German Federal Environmental Foundation. Is there somebody from DBU? No? Thank you, one more time. We released the first version in 2006 and then released several versions during the next three or four years, and the project was slowly developing. There was a Process class introduced, we played with plugins too, and of course we tried to improve stability all the time. The last stable version, branch number three, is still with us today, so it has lasted over eight years. During the year 2011, Jorge was the one who wrote the WPS cookbook, which was published thanks to the NETMAR project. It basically provides human-readable documentation for the standard, presents the PyWPS implementation, and also introduces other WPS software.
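To make the three request types more tangible, here is a small sketch of the WPS 1.0.0 key-value-pair (GET) requests a client might send to a PyWPS instance. The endpoint and the "buffer" process identifier are placeholders (as said above, PyWPS ships no buffer process; you would deploy your own), and the DataInputs encoding shown is a simplified assumption, since complex inputs are more commonly passed by reference or in an XML POST body.

```typescript
// Hypothetical PyWPS endpoint; replace with your own deployment.
const endpoint = "https://example.org/wps";

// 1. GetCapabilities: which processes does this server offer?
const getCapabilities = `${endpoint}?service=WPS&request=GetCapabilities`;

// 2. DescribeProcess: which inputs and outputs does a given process expect?
//    "buffer" is a hypothetical, user-deployed process identifier.
const describeProcess =
  `${endpoint}?service=WPS&version=1.0.0&request=DescribeProcess&identifier=buffer`;

// 3. Execute: run the process. In the KVP encoding, DataInputs is a
//    semicolon-separated list of name=value pairs (values may need URL-encoding;
//    complex data is usually passed by reference or via an XML POST body).
const execute =
  `${endpoint}?service=WPS&version=1.0.0&request=Execute&identifier=buffer` +
  `&DataInputs=size=100;layer=someLayerReference`;

// Fetch the capabilities document (an XML response) from a browser or Node 18+.
fetch(getCapabilities)
  .then((r) => r.text())
  .then((xml) => console.log(xml.slice(0, 200)));
```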
PyWPS is an open source, light-weight, Python based, implementation of the OGC Web Processing Service (WPS) standard. It provides users with a relatively seamless environment where to code geo-spatial functions and models that are readily exposed to the Internet through the WWW. Initially started in 2006, PyWPS has been completely re-written for PyWPS-4 taking advantage of the state-of-the-art Python infrastructure in order to provide new and useful features. The current version 3 implements the WPS 1.0 standard almost entirely. The recent publication of WPS version 2.0 - which brings forth important new functionalities - is also prompting this re-structuring of the code for PyWPS-4. PyWPS offers a straightforward WPS development framework with the increasingly popular Python language. Python offers easy access to a vast array of code libraries that can be easily used in the processes, in particular those for geo-spatial data manipulation, e.g. GRASS, GDAL/OGR, Fiona, Shapely, etc., but also to statistics packages (e.g. rpy2 for R statistics) and data analysis tools (e.g. pandas). PyWPS offers storage mechanisms for process inputs and outputs and spawns processes to the background for asynchronous execution requests. Future goals of the project include automatic publication of geo-spatial results through a WFS/WCS server such as MapServer and Geoserver and support for Transactional WPS with a process scheduler. The authors present general project news like to on going OSGeo incubation and the new Project Steering Committee as well as the current state of PyWPS, and show demonstrations how these services are currently being provided.
10.5446/20354 (DOI)
David Gonzalez. I'm CTO and founder of Vizzuality. Vizzuality is a digital agency: we design and develop web and mobile applications, data visualizations, APIs, interactive maps, and all sorts of digital products focused on environment, social development, open data, and transparency. Vizzuality is a founding partner of Global Forest Watch, which is the project I'm going to present today. We were responsible for the design and implementation of the main website and most of the tools in the platform, and also the implementation of the API. This platform has been helping preserve forests for two years now; I'm not sure if any of you knew it before. I'm going to try to give you a high-level overview, not too technical, but I'm happy to answer any technical questions afterwards. First of all, why Global Forest Watch? I think this phrase says it all: 50 hectares of forest are lost every minute of every hour of every day of every year. In the eight days of this conference, an area of forest equivalent to 40 times the city of Bonn will have been lost. Forests are fundamental for millions of people's lives. They are fundamental for climate change, they affect biodiversity, water and air quality, and they are pressured by economic and financial factors. When we started thinking about creating this platform, there was a huge data gap. Data was not timely: reports about forests came out every five or ten years. They were not comparable, because they were made with different methodologies. They were dispersed, they were inaccessible, sometimes not accurate. Many times they were unreliable, because governments don't want to look bad and kind of change the results. They were expensive, and they were too technical: you needed GIS expertise. This product is aimed to bridge that data gap. With GFW, we aim to bring together science, policy, and citizenship using technology and design. What is this data gap I'm talking about? I'd like to summarize it in these three phrases. Data must be available; we've made big progress in that. We have terabytes of data, and every day new data is generated from mobile phones, from satellites, from everywhere. But that data must be usable: it must be in a format and in a way that it can be accessed and used. Most important of all, data must be used. If data is not used to create impact or change, then data availability doesn't mean anything. Our theory of change around how to deliver data that drives change is summarized in this cycle, where available data can be combined to create insights, those insights can move to action, that action creates dialogue, and that dialogue generates the need for new data or the drive to create new data. To do this, one of the things that needs to happen, we think, is that we need to make the jump from matters of fact to matters of concern, because people actually use data when that data is around a concern. Facts and truths are not as important as concerns. I'm going to show you how Global Forest Watch aims to go from matters of fact to matters of concern. First, I'm going to tell you a bit about what Global Forest Watch is. It's a set of tools that includes an API, data visualization products, and forest-relevant information delivered by a public API. It hosts over 165 data sets, and this figure grows every month. A lot of them are global and near real time. They come from, right now, 150 different data providers. The vast majority of them are open data, except for a few notable exceptions. It's action oriented.
It's not another data portal, it's not a data dump. We try to empower people everywhere, all sorts of people, with the information they need to actually create impact: the information and the tools to actually manage forest landscapes and forest resources. It brings together science and policy through technology. It uses technology to provide precise information that can guide policy and decision making. It's not data per se, it's not raw data; it's data that can be analyzed, combined and used. It's open: all our tools are open, including open source code bases for all the tools and Creative Commons licensed data. It's free and simple to use. It's collaborative, open to contributions: the code is open to pull requests, and it's open to receive data from other actors. Users can also contribute by sharing information, creating stories, validating forest data from the ground, or discussing in the blog and the forums. Most of all, it's a partnership. This is the current list of Global Forest Watch partners. When we started, we were 10 partners; we are 19 now, two years later, and I think this is one of the big factors of success of this tool. Global Forest Watch has been moved forward by the World Resources Institute; they are the big promoters behind it. We like to say this is a forest transparency revolution. Why? Because for the first time we have a tool where anyone in the world with an internet connection can see when and where deforestation is happening, from their computer, their tablet, or their mobile phone, and they can do this with days of difference between when the deforestation happened and when they are looking at it on their phone. This is automated: we can go from detection by a satellite, through processing and analysis, to sharing on the web in a matter of hours. For some data sets, like fires, this is less than three or four hours; for more complex ones, like forest alerts, it can be one or two days. It is action-oriented: it takes data to where people can see it. We don't require people to come to our platform; they can create alerts that are routed to email or SMS, or even create a webhook that is going to trigger an action whenever a new deforestation alert is detected in the area of interest they chose. How do we do this? How do we go for action? By curating data and making it accessible. It curates and helps create all sorts of data: deforestation and reforestation, protected areas, primary forest, forest use, concessions, digital imagery like DigitalGlobe and UrtheCast, statistics, and user-sourced information. Here you can see what's available in the map; this is roughly the amount of data you have. As you see, it's not just forest change data; it's a lot of contextual data that relates to forests, including people's rights, resource rights, etc. It enables combinations, sharing and analysis: it makes it easy to visualize, download, combine or share information. Here what you are seeing is an animation of GLAD alerts: deforestation alerts based on Landsat data. This is the process a user would use to subscribe to alerts in a protected area in southern Borneo; as you see, the whole process takes maybe 30 seconds. This person will receive an alert whenever new deforestation is detected in that protected area, within hours or days of it happening. We also create new data and insights.
We use the existing data to combine it and create trends, summary products and tools that people can use to understand the status of forests in a broader way. In this case, what you're seeing is the climate part of Global Forest Watch: this is how a user would customize a comparison between two areas in Brazil in terms of forest-related climate data. These reports can be customized, they are interactive, and they can be shared or downloaded. Global Forest Watch is founded on open data; it wouldn't exist without open data. We use open data heavily to know when deforestation happens, but also to know why deforestation happens, and furthermore to know what that deforestation means, because not all deforestation is created equal; it depends. The first point: monitor when and where deforestation is happening, by detecting forest change globally in near real time. In this animation, you can see the UMD product, which shows deforestation alerts over the last roughly 14 years. This was created by the University of Maryland, and it is based 100% on freely available data from Landsat. Second, why is deforestation happening? We can compare that data to contextual layers. For example, in this case you will see an area in the center of Brazil that is heavily deforested, but for some reason there is no deforestation here. If you open the land rights layer, you will see that it matches exactly an area that is being managed by local people. So you can understand why deforestation happens, and what strategies or policies can help keep deforestation from happening. In this other example, we are again in Borneo. You see a heavily deforested area, and when you open the oil palm concessions layer, it becomes clear what is causing that deforestation: in this case, it is deforestation for oil palm plantations. The third point is knowing what the significance of that deforestation is. What you see here, the darker green, is what we call an intact forest landscape. In the year 2000, all the dark and light green was intact; no human activity had occurred there. As you see, we detected these lines, which are obviously logging roads, and wherever you see a logging road, the light green is a reduction in extent: degraded forest that is not intact anymore. Here you have the same example somewhere else, but earlier. In this case, we are not looking at yearly alerts, we are looking at GLAD alerts; these alerts may have happened only a week before this image was made. As you see, degraded forest gets heavily deforested, and new roads are showing up here. If we let this stay like that, it will be degraded forest in very little time. So the important thing here is that if we can detect deforestation at this level early, we can avoid heavy deforestation from happening; we can act. Here you can see how these logging roads showed up just weeks ago, and when you open the protected areas layer, you can see that this is actually a protected area; no logging roads should be happening here. This is in a Cordillera national park in Peru. Another example: what you see here is a tiger conservation corridor, and you can see that the corridor is about to be cut into by deforestation. This is how we look at the importance of deforestation. It's not just that trees are being cut; it's where they are being cut, how that affects people, how it affects biodiversity, how it affects climate. Finally, very important: GFW, as I said, is open source. The code is open source. The APIs are open source.
They are based on a microservice approach, which means new services can be created in any language as long as they adhere to certain standards. So anyone is invited to create new modules for the API that respond to different uses. We welcome developers to use the APIs to create derived products. Developers can extend any of the tools, and API users can query and download data programmatically. You can create subscriptions that work with your application when deforestation is detected. And of course, we are happy to hear any ideas of how this product could be extended. What's there for the future, what are we thinking about? Of course, we are thinking about even more data: more timely data, going towards real-time data, more global data, more local data. This is something that is going to happen: more crowdsourcing tools for data validation, for ground validation. We are exploring artificial intelligence solutions to flag relevant changes; for example, when you saw those roads showing up, it wouldn't be hard to detect logging roads by trying to find linear features in those alerts. And one of the big steps: we want to go from early detection to prediction, and that can be done with deep learning, knowing what patterns have happened in the past, what drivers affect deforestation, what areas are more at risk. We can go from early detection to prediction, and if we can take that step, we will avoid even those logging roads from showing up. I have a few examples of how Global Forest Watch has been used by governments, by NGOs, and by civil society. I don't have time to talk about them, but I'm happy to tell you about them if you want. Thank you very much. Thank you very much, David, for this great presentation. Do we have questions or comments from the audience? Yeah, hello, thank you. Are there any WMS services or something that can be consumed and are ready from your side? Yep. I mean, we actually consume other data sources, and we create our own. But yes: indigenous territories in Brazil, for example, is a WMS and it can be consumed. And we have a bunch of different layers, especially in the Commodities and the Fires sub-sites of Global Forest Watch; there are several of them. Any more questions? Yes? Will you be adding Sentinel data, or is it already there? It's coming. As we speak, we are working on it. Anyone else? We have plenty of time now, because we're only going to have two presentations, so don't hold back. Thank you for an excellent presentation. So you base this mainly on Landsat, or, you said many sources, do you use Landsat and MODIS? So it depends. As I showed you, there are several forest change layers. One of them, for example, is the UMD yearly product at 30 by 30 meter resolution; that's Landsat. But there is another layer called FORMA, which was based on MODIS, and there are derivatives of those that have been programmatically processed to give new products. And there are even layers which are done manually: in Brazil, the deforestation assessment (avaliação do desmatamento) is done by looking at imagery that doesn't have to be Landsat or MODIS, it can be Sentinel, and tagging it manually. So there are all different kinds of sources. Fires, for example, is the NASA product, which I believe is Landsat with a bit of MODIS as well, I'm not sure. And now we are experimenting with Sentinel data. So generally, you do not have your own database, but you're taking data from others? Yeah, it's usually like that.
We either take open products or talk to the scientists who produce them, take those products and put them on the page. Or sometimes what we do is harmonize different datasets. For example, protected areas is not the best case, because there is a harmonized dataset, but you have mining concessions for several different countries with different attributes, different formats, different projections; they are harmonized and merged into a global dataset. That's part of the... Thank you. Thanks for the presentation. I would like to know if it would be possible to use the data for mapping in OpenStreetMap, if this is okay with the license? Absolutely. Every piece of data in Global Forest Watch has a license, and most of them are not owned by Global Forest Watch, so you should look at the licenses, but I'm safe saying that 90% of the datasets there are Creative Commons; they can be used anywhere, they can be derived from and they can be used. And actually, there is a project called Logging Roads, also convened by WRI, the World Resources Institute, that is precisely about tagging roads from satellite imagery, tagging them into OpenStreetMap. Okay. The second question would be if the data is actually downloadable. Yep, the data is downloadable in most of the cases. It can even be queried and downloaded just for a polygon of your choice or for a year of your choice; the API allows for that. And again, the vast majority of the datasets can be downloaded, and we encourage people to download them and use them, of course. So that was only the 20 minutes alarm, so we still have some time for questions. Hi, thanks for this presentation. You quite often see such portals and websites, and what happens is that after some years or some months they disappear. So I would like to understand: do you have a funding model in place to assure that this service will continue, or a revenue model to continue this thing? So I think one of the things that has distinguished this project from the start is that the founding partners made a big effort for this not to be a three-year project; the intention was never to have a three-year project. We know that if a digital product is not taken care of, it just becomes obsolete very, very early. So this is a very important part that you mentioned. In the funding, it's not just about how you get the money, but how you use it. We knew that, first of all, it was important to release very early, to show the potential of this, to attract new partners. Apparently that worked: we started with 10 partners, we are 19 now, and all those partners contribute efforts in expertise, in data, in infrastructure, and also in funding. So that's very important. How do you get people to come? By evolving the tool every day. This tool has had maybe 20 partial releases since it was born, it has been redesigned, it's a perpetual prototype. We know we can't get it perfect, but if we get it early, it's going to make a difference, and that's what we try to do. We try to release very early, even if that means that sometimes you have certain products not working perfectly, but you show the potential and you engage people, and I think that's the most important part.
We're not thinking of building a tool whose development is going to be closed after the first stage; we are iterating on a prototype perpetually. Okay, I have a final question, maybe. You're looking into the why of the deforestation process quite a lot. If you make statements about that, that's quite a responsibility in regard to data quality. You said you are starting to look into data quality issues regarding crowdsourcing; how do you deal with that at the moment? So yeah, I'm not going to deny that there are a lot of issues around data quality. My point is, we didn't have that data before. This data may not be perfect and it may need to be validated by someone, but it's data and it's an eye in the sky. It's not asking a government to tell you how much deforestation happened; it's data that hasn't been manipulated in any way. So of course the data may not be perfect, but it means something that it is there, because then someone can go and validate it. For example, that logging road they were showing there: if it didn't exist, okay, fair enough, it doesn't exist, but most of the time they do. We may be wrong about the exact date maybe, or the exact pixel it should fall in. Another thing: this is not intended to point fingers. This is not intended to go to companies and governments and say, hey, you're doing wrong. Actually there is a sub-site called Commodities, and there's going to be a new one called finance.globalforestwatch.org, which are intended for companies to actually use these tools to reduce the deforestation in their supply chains, to understand where what they use comes from and whether that involves any deforestation risk. Let's say IKEA should understand where the wood they are using comes from and whether that involves any deforestation risk. And when they understand it and they have these tools, they probably are going to be more conscious and make decisions regarding deforestation. Okay, thank you very much.
Global Forest Watch (GFW) is an interactive online forest monitoring platform designed to empower people everywhere with the information they need to better manage and conserve forest landscapes. Thanks to open data, GFW is able to do the following: Monitor when and where forests are changing. NASA’s freely available Landsat and MODIS data has allowed hundreds of scientists and researchers to develop innovative solutions to monitor landscape changes. Algorithms are now used to process and analyze this remotely-sensed data to show when and where forests are changing with surprising precision and speed. Understand why forests are changing. Open data showing boundaries of land allocated for specific purposes, such as commodity production and conservation, as well as land management, allows us to understand why forests are changing. Are trees being cleared for palm oil? Are certain swaths of forests still standing because they are managed by indigenous groups? Gauge the significance of deforestation. Additional open data provided by research institutions, governments, and others is used to understand the implications of deforestation on biodiversity, climate change, and provision of ecosystem services. For example, was a recently clear-cut area of forest home to endangered species? Was it a carbon rich primary forest? Spark further innovation. GFW’s open-source code and APIs allow others to leverage GFW’s analysis tools and open data to create additional forest monitoring and management tools.
10.5446/20352 (DOI)
Hello everybody, welcome to this third and last session of today. We will have three talks, all about the web environment. The first talk will be given by Vladimir Agafonkin about Mapbox GL. Please. Hi everyone. So I have 100 slides in 20 minutes, so I will be very quick. I am going to talk about how vector maps work. I am a rock musician from Ukraine, and I work as an engineer at Mapbox. For many years web maps have meant raster tiles, but now we're slowly witnessing the coming of the new era, the vector maps era. You can see that all the big players in the mapping industry that have built full mapping platforms work on vector map rendering technologies. Some of them are proprietary, fully closed; you can't see how they're implemented, like Google and Apple and HERE, for example. Some companies are really nice and awesome and work fully in the open, like Maps.me, Mapbox and some others I haven't yet mentioned. And now I'm going to talk about Mapbox GL and how it works under the hood. Mapbox GL is fully open source. It's a platform for both mobile devices and browsers, so it works in browsers and in cars; you can embed it, you can see it on iPads, iPhones, etc. It's based on open data like OpenStreetMap. The first advantage you're going to see when you use a vector-rendered map is that it has smooth zoom and rotation, and it just feels so much different when you're using a map like this. Everything transitions smoothly, labels appear gradually, you can rotate the map and the map adjusts, and it just feels so much better to use than a traditional raster map. So that's the first advantage. And you can do really fancy animations, like fluidly changing views like this. There's actually no concept of integer zoom when you're doing vector maps; you can render at any fractional zoom level. And we have a special style language where you can define interpolations between values, so you can define how a road width gets smoothly wider as you zoom in. Another advantage is full control over data presentation in real time. You can change any aspect of the map at any time. This is just an artificial example: you can change colors, you can hide and show objects, reorder them and do anything you want with the map, change any kind of visual representation in real time, and you can animate between values. This opens up a whole new host of possibilities for making new interactive experiences with maps, which is really, really powerful and not something we could do before with raster maps. With raster tiles, you would have the base map and some stuff on top that we call markers or overlays, but there's no distinction between overlays and base maps in vector maps, because you can change everything. And you can do some really fun stuff, like dynamically changing the light direction on a terrain-rendered map; you have to look closely to notice this, but it's really cool. And another big advantage is that any object on the map that you can see can be interactive. You can make any object that is rendered clickable and hoverable with a mouse, or do any kind of interaction with it.
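As a concrete illustration of runtime styling and per-feature interactivity, this is roughly what it looks like with the Mapbox GL JS API. The access token, style URL and layer ids ("water", "building") are placeholders that depend on your own account and style, so read it as a hedged sketch rather than a drop-in snippet.

```typescript
import mapboxgl from "mapbox-gl";

// Placeholder token and style; both depend on your own account and style.
mapboxgl.accessToken = "<your access token>";

const map = new mapboxgl.Map({
  container: "map",                               // id of a <div> on the page
  style: "mapbox://styles/mapbox/streets-v9",     // any GL style JSON works
  center: [7.1, 50.73],                           // Bonn, roughly
  zoom: 14,
});

map.on("load", () => {
  // Runtime styling: repaint a layer instantly, no tile re-rendering needed.
  map.setPaintProperty("water", "fill-color", "#1c3a4a");

  // Interactivity: every rendered feature can be queried at a screen point.
  map.on("click", (e) => {
    const features = map.queryRenderedFeatures(e.point, {
      layers: ["building"],                       // assumes the style has this layer
    });
    if (features.length) {
      console.log("clicked building", features[0].properties);
    }
  });
});
```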
And you can make roads interactive, buildings interactive; everything on a map can be interactive, and this can enable really interesting applications. Combined with the previous advantage, that you can change everything on the fly, you can create a visual map editor. You can build an interface like this where you just click on stuff and change colors in real time. This is a really powerful way to do cartography, because you have instant feedback, and you can design maps in minutes. You should try this; it works in a browser, you can do this with Mapbox Studio. And the style language is really powerful; you can make really differently looking maps with it. And you can do stuff that you couldn't do before, like video maps. This is an example of a video from a satellite on top of an interactive map that you can drag and zoom, something you couldn't do before at all. It's very cool. And of course you have perspective and 3D capabilities, like the popular view for in-car navigation where the map is slightly tilted. And of course there are 3D buildings coming to Mapbox GL, and we'll start working on terrain rendering after that. Another advantage of vector maps is that they use much less bandwidth than raster maps. This is a chart I made when I used a fixed screen and zoomed from zero to the maximum zoom: vector tile based maps load many times less data than raster tiles do, especially if the raster tiles are retina-based. So we have two main repositories for this: the JavaScript implementation of the rendering, Mapbox GL JS, and the native one, which is mostly written in C++ with bindings for iOS (Swift and Objective-C), for Android (Java) and all those other platforms. These are two implementations that we work on in parallel. And now I'm going to talk about WebGL and how we do it on the browser side. If you look at WebGL support, how many users are actually capable of using WebGL applications: WebGL is supported by the vast majority of users, like 95% for the USA and globally almost 90%. So pretty much all users can now use WebGL-powered applications, and you don't need to worry too much about leaving behind people who can't use it, because it's pretty much everywhere now. So a logical question arises: why is WebGL not used very often? If it's so powerful, if you can do really crazy things with it, why isn't it everywhere yet on the web? The truth is, developing a WebGL-based application is extremely hard. We've been working on Mapbox GL for over three years with a big dedicated team, and we're just getting started; there is so much more stuff to do. It's really, really hard. There's a common misconception that OpenGL is a 3D API, some kind of magical API where you define 3D objects and it just renders really fast. But in fact, OpenGL is a low-level, two-dimensional API. To simplify things, all OpenGL can do is draw triangles, so we can define OpenGL as a technology for drawing triangles really, really fast. WebGL allows you to write two types of special programs directly for the graphics processing unit. One type of program is called a vertex shader, where you can write fancy math to transform vertex data, for example to project 3D vertices into screen vertices for display. And the other type of program is the fragment shader; it computes pixel colors, so you can do some fancy calculations to determine which pixel is drawn with which color.
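To give a feel for these two program types, here is a minimal vertex/fragment shader pair embedded as strings, the way WebGL code usually carries them. This is not Mapbox GL's actual shader code, just the smallest illustrative example: the vertex shader projects 2D positions with a matrix, and the fragment shader fills every covered pixel with one uniform color.

```typescript
// Minimal GLSL ES shader pair as used with WebGL 1 (illustrative only,
// not Mapbox GL's real shaders).

// Vertex shader: runs once per vertex and transforms positions.
const vertexShaderSource = `
attribute vec2 a_pos;        // vertex position from a buffer
uniform mat4 u_matrix;       // projection / view matrix set from JavaScript

void main() {
  gl_Position = u_matrix * vec4(a_pos, 0.0, 1.0);
}`;

// Fragment shader: runs once per covered pixel and decides its color.
const fragmentShaderSource = `
precision mediump float;
uniform vec4 u_color;        // fill color set from JavaScript

void main() {
  gl_FragColor = u_color;
}`;

// The strings are compiled and linked into a program at runtime, e.g.:
//   const shader = gl.createShader(gl.VERTEX_SHADER);
//   gl.shaderSource(shader, vertexShaderSource);
//   gl.compileShader(shader);
// Everything a renderer like Mapbox GL draws ultimately goes through
// programs of this shape, just with far more math in them.
```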
And basically that's it. So let's for example look at how to draw lines in WebGL. You would think, what is simpler than drawing a line? But we want to draw nice lines that are smooth around the edges, which is called anti-aliased lines. And to draw a segment of a line in WebGL, because it can't really draw lines, it can only draw triangles, a simple approach would be to draw six triangles: two triangles would be solidly filled, and the triangles on the outside would have a smooth gradient to make the edges smooth and anti-aliased. WebGL, and OpenGL in general, also has a feature where you can assign special custom attributes to each vertex to do fancy math with, and in the end you can draw a segment of a line with just two triangles, which is three times faster, but with slightly more complicated math behind it. But this is just one segment, and if we are talking about a polyline, there are three types of line joins and three types of line caps that we also need to support for full-blown cartography. For example, to draw a round line join, we just need to draw a lot of triangles. Drawing polygons is much, much harder than drawing lines, because polygons can be complicated and they can have holes, and OpenGL can only draw triangles really efficiently, so we need to turn polygons into triangles. It turns out this is a really complicated computational geometry challenge with many years of research put into it, and there are no perfectly efficient algorithms for it. We couldn't find a great triangulation library that was fast enough for our use case on the JavaScript side, so I spent many months of sleepless nights researching papers, and in the end we developed a new library called earcut in JavaScript that was really, really fast at triangulating polygons. It was so good that we ported it to C++, and now we use both libraries. So that's drawing polygons, but drawing polygons is really easy compared to drawing text. Drawing text is just insane. OpenGL can't draw text the way you're used to. If you're used to the Canvas API in the browser, you just say, draw this text in this font, and that's it, the text is drawn; it's really, really cool. In OpenGL we have to render so-called font textures on the server side, where we render every letter into an image, and then we draw two triangles and fill them with a small part of the texture to draw one letter. And we need to load these textures depending on what areas of the map we view and what letters we need. We have a font server that serves particular Unicode ranges. For Latin characters it's simple: most characters are in the 0 to 255 range, so you only need to download one file to display most of the text for a certain font face. But for a Chinese map, for example, there are thousands of symbols in the Chinese writing system, so it's really hard. And it gets much more complicated when we consider changing the size of the text, rotation, and drawing halos. Because if we just draw simple font textures and try to resize the small bitmap, it gets blurry, and when you rotate it, it gets blurry too, not very good quality. So the good guys at Valve, the company that brought us the Half-Life games and is hopefully still working on Half-Life 3, released a paper about signed distance fields. It's a technique that is now used everywhere in computer graphics. It allows you to encode a vector shape into a texture that looks like this, encoding the distance to the outline in each pixel.
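Going back to the polygon triangulation step for a moment: earcut takes a flat array of polygon coordinates, optionally with hole indices, and returns triangle indices into that array. A small sketch with made-up coordinates:

```typescript
import earcut from "earcut"; // npm package "earcut" (CommonJS; esModuleInterop assumed)

// A square outer ring with a square hole, as one flat [x, y, x, y, ...] array.
// The outer ring occupies vertices 0..3, the hole starts at vertex index 4.
const vertices = [
  0, 0,   100, 0,   100, 100,   0, 100,   // outer ring
  20, 20,  80, 20,   80, 80,   20, 80,    // hole
];
const holeIndices = [4]; // the hole begins at vertex 4

// Returns a flat list of vertex indices, three per triangle,
// ready to upload to a WebGL element array buffer.
const triangles: number[] = earcut(vertices, holeIndices, 2);

console.log(triangles.length / 3, "triangles"); // e.g. 8 triangles for this shape
```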
Back to the distance field textures: a special algorithm allows you to draw the edges of the encoded shape at different sizes really crisply. So we use this technique, and our font textures look like this, so we can render them really nicely. But there is another problem, because OpenGL has a limitation on how big a texture you can use, like 1024, and if you start browsing a map with a lot of characters, like in China, it just fills up and starts to break rendering. So Bryan Housel, who is also a maintainer of the iD editor for OpenStreetMap, wrote the shelf-pack library. It implements a cool algorithm that dynamically fills up a texture depending on what letters are used in the text, and we ported it to C++ and it's working well. But there is another problem that we haven't even solved yet: rendering complex scripts such as Arabic. It's just so crazy hard that it's a topic for another big talk, maybe one hour long. Then there's loading and processing data. Data is loaded as tiles, like you would do with raster, but instead of an image each tile contains vector data. The vector data is encoded in a binary format based on protocol buffers; it's very compact, and we want it to be as fast as possible, so we wrote two awesome libraries that are among the fastest protocol buffers encoders and decoders: pbf for JavaScript and protozero for C++. The encoding makes the data about four times smaller than JSON, and about seven times faster to decode than even the native JSON parser. And we have an open spec, the vector tile spec, that defines how the data is encoded. This spec is openly developed, and it has been adopted by other companies: Esri is now using the technology for their vector rendering, and Mapzen for Tangram. And then there's the label placement problem: when you have a lot of labels, you need to place them nicely so that they don't collide. When you zoom in, you can notice the curved labels placed along lines. This is a separate, really big challenge, and this is how we make the labels not collide: we cover them with small squares, then we put those squares into a spatial index, and then all the collision checks are really fast. The fastest spatial index in JavaScript is called rbush, and we also use a special index called the grid index in Mapbox GL that allows us to transfer the index in the form of typed arrays, which I'll mention a bit later. Then there is this example of loading a hundred-something megabytes of data dynamically in the browser: a few seconds pass and then you can browse it seamlessly, and this is all happening in the browser without a server at all. There is a very cool library behind it called geojson-vt that cuts the data into tiles on the fly. And then there's point clustering, a recent feature that clusters points; we wrote a library called supercluster for that, which is really fast and can cluster millions of points, and you can even kind of fake a heat map with it. There's a tricky thing about doing all these computational algorithms: you can't do them on the main thread, because that would block rendering and it would stutter. You have to do them in separate threads, but then you have the problem of communicating between threads, and you need to turn everything you communicate, all the data, into arrays of numbers to make this efficient, which is a separate challenge. Also we have the style specification, which is a whole language that defines how to do the visual styling; it's used by our editor, and you can make very differently looking maps with it.
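As a sketch of the label collision idea just described (cover each label with small boxes, put the boxes in a spatial index, query before placing), here is how that looks with the rbush API; the box coordinates and label names are made up:

```typescript
import RBush from "rbush";

// Each placed label contributes one or more axis-aligned boxes to the index.
interface LabelBox {
  minX: number;
  minY: number;
  maxX: number;
  maxY: number;
  label: string;
}

const index = new RBush<LabelBox>();

// Try to place a label: only insert its boxes if none of them collide
// with boxes of labels that were already placed.
function tryPlaceLabel(label: string, boxes: Omit<LabelBox, "label">[]): boolean {
  const collides = boxes.some((b) => index.search(b).length > 0);
  if (collides) return false;                  // skip this label at this zoom
  for (const b of boxes) index.insert({ ...b, label });
  return true;
}

// Made-up screen-space boxes for two overlapping labels.
tryPlaceLabel("Bonn", [{ minX: 100, minY: 50, maxX: 160, maxY: 70 }]);   // true
tryPlaceLabel("Beuel", [{ minX: 150, minY: 60, maxX: 210, maxY: 80 }]);  // false, overlaps
```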
The style language is really powerful. And we have to make sure that rendering is the same across all implementations and across all platforms, be it iOS or Qt or anything else, or the different browsers. So we had to write a very extensive test suite that has sets of expected images: every property of the style spec is rendered and compared to the expected output. And we had to write a special library for comparing screenshots that takes anti-aliasing differences into account, because different platforms can render things a bit differently. And it's really fast, working really nicely. We use four different platforms for continuous integration to cover all the platforms we support. And the question arises, what's on the server side? Well, I won't talk about that. Yeah, so this is it. And another thing I would like to mention is that today marks 25 years of independence of Ukraine. So please drink a lot of beer for Ukraine today. Thank you very much. Very good. Great talk. Are there any questions? Hi, thank you very much. Just going back to the way that you transmit the data: you said there are no zoom levels anymore in WebGL, and you transmit vector tiles. But I assume that still you would have to make a decision what data you transmit. So if you're pretty far zoomed out, you probably won't transmit a single building there. How is that done, and how do you determine that on the server side? Yeah, so data is still preprocessed for each integer zoom level. And then after we've transferred it, we just interpolate it and smoothly transition between zoom levels as we zoom in. But the precision we can pack into one zoom level is enough to display the map on the next zoom level, so by the time you zoom in, new tiles load and you don't have a problem of losing precision. One little follow-up question: when things overlap with vector tiles, how do you handle that? So if a street goes across multiple vector tiles, is that split up? Yeah, it's just split up, and we have buffers around tiles to make sure that features smoothly transition into each other. There are no seams between them. You mentioned quite a lot of problems you need to solve with web maps here, label placement and all sorts of stuff. Are you also participating in web standards to improve things, to make WebGL better, to handle fonts better? I know there is development on WebGL 2.0. Well, unfortunately I only have 24 hours in a day and two kids. Working on standards is extremely hard; I'm just not the type of person that would work on standards. We do have people at our company who, with a few other people, basically invented the GeoJSON spec. There are others pushing on standards, but we're focused on cutting edge implementations. The current state of affairs is that standards follow implementations rather than the other way around. Some browser like Chrome implements some cool feature, and others say, oh, this should be a standard. This is how it happens, rather than the other way around, so that it can progress really quickly. How do you interface with things that are inherently raster? You showed some examples where you had imagery and video and things underneath. How is that handled? WebGL can handle raster tiles as well, and video. We can just load it and display it as usual and scale it alongside the vector data. Thank you once again.
Mapbox GL JS is an open source library for modern interactive maps, powered by WebGL. Developed for more than 3 years, it combines a variety of sophisticated algorithms, smart ideas and novel approaches to deliver 60fps rendering of vector data with thousands of shapes and millions of points. In this talk, you will find out how it works under the hood and why it's so challenging to build dynamic WebGL applications. The talk will cover scalable font rendering, line and polygon tessellation, in-browser spatial indexing, collision detection, label placement, point clustering, shape clipping, line simplification, sprite packing, efficient binary data encoding and transfer, parallel processing using Web Workers and more!
10.5446/20349 (DOI)
Good afternoon. So welcome to this session. I think it's going to be very interesting, and we're going to hear about some developments in spatial databases. We're going to hear about different NoSQL paradigms and also a little bit of cloud architecture. So I start with the first speaker, Volker Mische, who doesn't need a lot of introduction, really. He's been collaborating in a lot of open source projects, but he's probably best known for his spatial extension for CouchDB, GeoCouch. So I'm going to give the word to him and I'll be here sitting, controlling the time. Thank you. Hi, everyone. I'm happy to be here. It's kind of a premiere, because I've spoken at the last five FOSS4Gs about GeoCouch. This time I won't speak about GeoCouch, but it's pretty similar. The talk will be a bit technical, but still a lot of fun, I think. So I already got an introduction, so I skip this and go straight to the topic. It's about an R-tree implementation for RocksDB. And I start with RocksDB, because I guess not everyone is aware of what it is. RocksDB is a key-value store. And a key-value store is a system where you can store data. What they always support is something called key lookups, which means, for example, you have a billing system, and you store your invoices, and they have a reference ID. And whenever you want to get an invoice out of the system, you just use the reference ID and get it back. Many key-value stores, not all of them, but RocksDB does, support range queries. So let's say you store weather data and you store the temperature every day. You can say, give me all the temperatures from the past week. And those key-value stores are often a building block for bigger database systems. So for example, if you have a huge distributed system, in the backend it's normally a key-value store that stores the actual data. And of course, the question is, well, there are plenty of key-value stores out there, why is RocksDB so great? It's real open source. With real open source, I mean they don't just publish the source code and not care about the community; they have real open development. So you can even see their own internal code reviews when they do changes. You see all the changes they are doing. And you can also contribute yourself. You can make a change on GitHub and they will merge it. It can even happen that it breaks their system, which I did in the past week. But then they just revert the change and everything is fine again. And they have contributors, individuals, but also companies. And what was the most surprising to me was that last year they started getting contributions from Microsoft. What Microsoft did was make sure that RocksDB works well on Windows. And with working well, it's not only about compiling, which is basically all most projects care about, but it's also really about the performance. So for example, even if there are changes that cause performance regressions on Windows, Microsoft makes sure to fix those issues so it performs well again on Windows. And as I said, it's a building block for huge databases. It can already be used as a backend for MySQL and for MongoDB, and Couchbase now has a full text search system where the storage is also pluggable, so there you can also use RocksDB as a backend. And it's fast. And why is it so fast? The main thing behind this, why it gets such performance, is the LSM tree.
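Before the LSM tree details, a toy sketch in JavaScript of the store's contract described above, exact key lookups plus range queries over sorted keys; the dates and temperatures are invented, and a real store like RocksDB of course does this over on-disk structures rather than an in-memory array.

```js
// A toy key-value store: entries kept sorted by key, supporting the two
// operations described above.
const entries = [
  ['2016-08-20', 19.5],
  ['2016-08-21', 22.1],
  ['2016-08-22', 18.0],
  ['2016-08-23', 21.7],
];

// Exact key lookup, e.g. fetching an invoice by its reference ID.
function get(key) {
  const hit = entries.find(([k]) => k === key);
  return hit ? hit[1] : undefined;
}

// Range query, e.g. all temperatures from the past week.
function range(from, to) {
  return entries.filter(([k]) => k >= from && k <= to);
}

console.log(get('2016-08-22'));                  // 18
console.log(range('2016-08-21', '2016-08-23'));  // three entries
```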
LSM tree stands for Log-Structured Merge-Tree, and it was originally described in a paper in 1996. And at least as far as I know, it kind of got forgotten for 10 years. One occurrence is then in the Google Bigtable paper, where they describe that they use this data structure for the database. And then Apache Cassandra, which was originally created by people from Facebook, also took up the approach. Then in 2011, Google published LevelDB, which is again a key-value store, and they use it within Chrome for their offline storage stuff. And then Facebook came and forked LevelDB and created RocksDB out of it. So you can see RocksDB as an improved version of LevelDB. And as I said previously, the improvement is not only on the code level, but also on the community level. So previously the LevelDB development wasn't that open. It's getting better, but in the beginning it wasn't that open. All right. So what is a Log-Structured Merge-Tree? It's a data structure, as I mentioned. And the original paper talks about managing tree-like structures. Google did something else, so Bigtable, Cassandra and also RocksDB use a thing called SSTable. It stands for sorted strings table, which means you have flat files. And within the file, you always store only the key and the value, and then the next key and the value, prefixed by their sizes, so you can easily read it in. And you can also write it very fast. This is how you really store the data on disk. But there's more, and it's easy to explain with an example. So in this example, you see potential SSTables; they are still empty, just small boxes. And we want to add data in there. In this case, to keep it simple, the keys I insert are just numbers, and we don't care about the value. So I add the number 8, and let's see if it fits in. Yes, it fits in, so we are done. The 8 is inserted, everything is fine. Now we add another number. And now we find out, oh, well, the first SSTable is full. So we just take it out and sort the result, because as we've heard, SSTable stands for sorted strings, so we sort the result and try to merge it with the next level. The next level is empty, it fits in, we are done, everything is fine. Next we insert another number, the 5, it fits in. I think you slowly get the idea. We insert another thing. It doesn't fit in, we sort it. Now it gets interesting, because now we try to merge, and again, it doesn't fit in. And we sort the result again, try to merge it with the next level, it fits in, and we are done. So we add a 2, and I think you get the idea, this is how it goes on. And now you probably think, well, that's super complicated just to get things sorted, you could probably do it simpler. So what are the advantages? The key part, before coming to the details, is that the data in the steps in between is always sorted. And merging such sorted data is very fast and efficient. I'm going to show an example, and after all those examples, you hopefully get the idea why this might be a faster system than other systems. So in this case, we have again those numbers, and now we merge a level from the inputs, so the top part is the one level, the lower part is the other level, and we want to merge them. As you can see, they are already sorted, because they are always sorted. Now we merge them. So we compare those numbers, it's a 1 and a 4, and we just take the smaller one of those, and this is our result, the smaller value.
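The step-by-step walkthrough continues below; as compact code, the merge of two sorted runs looks roughly like this (plain JavaScript, with in-memory arrays standing in for SSTables).

```js
// Merge two already-sorted runs in a single streaming pass: always take the
// smaller head element, so very little memory is needed.
function mergeSortedRuns(a, b) {
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(a[i] <= b[j] ? a[i++] : b[j++]); // take the smaller head
  }
  // One run is exhausted; append the rest of the other.
  while (i < a.length) out.push(a[i++]);
  while (j < b.length) out.push(b[j++]);
  return out;
}

console.log(mergeSortedRuns([1, 3, 4], [4, 5, 9])); // [1, 3, 4, 4, 5, 9]
```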
So, continuing the example, we can move on on the input and compare the next two numbers. We now compare the 3 and the 4, we use the smaller value as the output, and we move on. Now we have the 4 and the 5, this time from the other input; we take the smaller one into the result, move on on the bottom, and this is basically how you keep going. You always compare the values, use the smaller one, and get a bit further. As you can see, this is kind of a streaming process, so if you implement it, you don't need much memory; you can really just move on in a streaming fashion and efficiently read out the data and also store it on disk. All right. Now, this merging is super efficient. What we also get is that those files never change. So once you've merged the data, we have a static data set. A static data set is always great for building up index structures. And in our case, we build up an R-tree, and bulk loading such an R-tree from the bottom up is way faster than inserting into a data structure dynamically. We have no in-place updates, which is good, because if there is a bug in the software, you don't overwrite the data; some data might be wrong at the end of the file, but not in the middle, because you don't overwrite the file. As I said, the sorting is very fast. And of course, which I haven't shown, is that if you delete data, you need some cleanup process. But with this continuous merging going on all the time, you can just clean things up during the merging step. So now we finally come to the geo part of it, because so far this was easy to show with numbers. You can easily imagine how to sort numbers, because you've learned it for ages. But the difficult thing is, how do you sort two-dimensional data? There's a thing called space filling curves. There are many different ones, and one of those is the Z-order curve. And I use it in this case because it's a very efficient one. It's easy to compute, easier than others, but it still gives you quite good results. So in this example, we see we have some data scattered around some space. But for now, we don't really look at the data, we just look at the space. So the question we now ask is, what's the sort order of this? Is it kind of left to right? Is it top to bottom? Is it somehow different? And to answer this question, we look first only at the space. So we have just some empty space, and there is some data in there somehow. Now we divide the space, and you might already get the idea. Now you have four quadrants, and we draw a line in there. And now it also becomes clear where the name comes from, the Z-order curve, because the line we draw through the quadrants looks like a Z. As we draw this line, we can easily number those quadrants and have an order. So if we now look at the data in our space, you can see that we can derive an ordering. In our case, from the locations, A is smaller than C, which is smaller than B. And this way, you can easily sort your spatial data set. But now, what happens if you add another data point in the same quadrant? You could, of course, say, well, it's the same quadrant, so D equals B, but this is not very useful. But what you can do, and it's another nice thing about the Z-order curve, is just split this quadrant again and do the same thing. And now you can see that D is smaller than B with the Z-order curve. OK, now we come to my final example, because this was already quite some computer science stuff; it was more or less an introduction into computer science data structures.
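To make that quadrant ordering concrete, here is a small sketch of sorting points by a Z-order (Morton) key: the bits of x and y are interleaved, and sorting by the interleaved value reproduces the recursive quadrant numbering. The sketch assumes coordinates are already non-negative 16-bit integers, and the point coordinates are invented.

```js
// Interleave the bits of two 16-bit integers into one 32-bit Morton code,
// with y taken as the more significant bit of each pair.
function interleave16(x, y) {
  let code = 0;
  for (let bit = 15; bit >= 0; bit--) {
    code = (code << 2) | (((y >> bit) & 1) << 1) | ((x >> bit) & 1);
  }
  return code >>> 0; // reinterpret as unsigned
}

const points = [
  { name: 'A', x: 100, y: 200 },
  { name: 'B', x: 900, y: 850 },
  { name: 'C', x: 600, y: 300 },
  { name: 'D', x: 800, y: 820 },
];

// Sorting by the Morton code gives a one-dimensional spatial sort order.
points.sort((p, q) => interleave16(p.x, p.y) - interleave16(q.x, q.y));
console.log(points.map(p => p.name));
```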
So far this was mostly computer science, and for those who are into the geospatial stuff, it probably wasn't that exciting. But now I come to a thing which is interesting, because it is a problem that I hadn't thought of when I wanted to create the LSM R-tree. I was in luck, because recently, I think three years ago, there was a paper from a university in California on LSM R-trees. And there they solved the problem, and it's just mentioned in a small sentence and in the implementation; it isn't even really mentioned in the paper. But I think it's a great idea and I want to present what they're doing. Because the problem is, how do you merge then? So if you've sorted the thing, how do you merge two files? I'll first describe the problem again. So you have two data sets and you want to merge them. You can, of course, sort them as we did previously. But the problem now is that they only have a local sort order. So within those two data sets, you know how the data is sorted. But what is the relation between those data sets? Is A smaller than B, or is B bigger than A? It's hard to tell. One solution would be to just create another space around all your data and sort it again. But that's obviously super inefficient, because you would always need to re-sort the data. This is not what you want to do. So yeah, this is basically what I already said: it's only sorted locally, and you would need a lot of recomputation. The solution is to just divide all the space you have. Now you may wonder, well, the space is infinite, how do we divide an infinite space? The good thing about computers is that almost nothing is infinite, and so there's a solution to this. And I'm a bit sad that I did the presentation already a week ago, because at the code sprint I was talking to someone about floating point numbers and so on, so I will probably change it and won't use 64-bit floating point numbers for my geospatial data structures. But it would work the same if you used integers or fixed point numbers. It works because numbers on computers are limited to a certain size. So in this case, the space is pretty huge, but you just take the whole 64-bit range for floating point numbers and you keep dividing the space. And so now, in this whole space, the full range of numbers the computer can handle with 64 bits, we have the data somewhere. It's so small that I just put it as a point. There's a lot of data in there, but it's so small you can't really see it; it's just somewhere there. So you just divide the space that you currently have and look at it. So now you say, okay, it's in the first quadrant, but you still can't tell how the data is sorted within it. But what you do is you just look at this quadrant and divide it. Now it's in the third quadrant, so we keep dividing and keep zooming in. And now we've zoomed into the third quadrant, and we still can't really see the data, because it's so small and the space is still so big. So we keep doing this, and in the end we have zoomed in so closely that on the final division we have a sort order. So the idea behind this is that you look at the full space and kind of zoom into your data, but as all your data uses the same Z-order curve over the same full space, you can now easily merge those two data sets together. So this was a lot about data structures and all that stuff. One reason is that I thought I would code this within a week, and if a developer says it takes a week, it will probably take a month. I've spent two months. It kind of works, but not really well.
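A sketch of that "divide the whole space" idea in JavaScript: every coordinate is mapped into one fixed global grid before computing its Z-order key, so keys coming from different files are directly comparable. The world extent and grid resolution below are assumptions for illustration (the talk divides the full 64-bit range instead), and interleave16() is the function from the previous snippet.

```js
// One agreed global extent shared by every file (assumed here, not from the talk).
const WORLD = { minX: -180, minY: -90, maxX: 180, maxY: 90 };

// Map a coordinate inside the extent onto a 16-bit cell index.
function toGrid(value, min, max) {
  const t = (value - min) / (max - min);          // 0..1 inside the extent
  return Math.min(65535, Math.floor(t * 65536));  // clamp the max edge
}

// A key that is comparable across data sets, because both use the same grid.
function globalKey(lon, lat) {
  return interleave16(
    toGrid(lon, WORLD.minX, WORLD.maxX),
    toGrid(lat, WORLD.minY, WORLD.maxY)
  );
}

// Points from two different SSTables can now be merged by comparing keys
// directly, with no re-sorting of either file.
console.log(globalKey(10.75, 59.91) < globalKey(139.7, 35.7));
```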
So the current state is, I think I now have a good understanding of RocksDB, and it turns out the internals are way more complex than I thought. And it's a free time project, so I can't spend much time on it. There's some code already uploaded, so it should compile, but you can't really query it yet, I think. But I really want to keep it going. The good news is, well, obviously a question you could ask is whether this whole thing is a good idea: why doesn't Facebook already do it? The good thing is, I was at the RocksDB meetup last year in the US and I asked a few core developers, and one of them said that, yeah, he thinks it is a good idea, I should try it, and we'll see how it turns out. So for the future, the best case obviously would be if my changes get merged upstream into RocksDB, so it would then be supported by them and you could store spatial data in there. Really good would be if I can convince the RocksDB core team to at least change some APIs so I can just make an add-on to it and don't need to change the core of RocksDB. Still okay would be if it's an easy-to-maintain fork; let's say I need some internal functions that they don't want to expose publicly, I can just call those and keep a simple fork. And of course the worst case is they don't care about it at all and I need to stay on a full fork. But that's okay as well; then that's just how it is. All right, so that's all I have for now. You can check out the code, or if you have any ideas, or better ideas on how I should solve the problem, or want to help, let me know. Thanks for your attention. Thank you. So thank you very much for this interesting talk, now we have time for a few questions. Actually we have some minutes. Okay, first question. Yeah, okay, so I repeat the question so everyone on the stream can hear it as well. The question was whether I also looked into the Hilbert curve and, if I did, whether there were performance issues. I did; the reason why I only went with the Z-order curve is that there's a paper about query-adaptive sorted bulk loading or something like that, and they basically did the work and compared the Z-order curve with the Hilbert curve. They said that the Hilbert curve creates a somewhat better data structure, but it's not worth the performance overhead, because the Hilbert curve is way more complex to compute. And the other reason is that if you go into higher dimensions, the Z-order curve is still very simple to compute, while the Hilbert curve gets really complex once you get into more than four dimensions. That's the reason for the Z-order curve. Okay, any more questions? I would just like to ask you, do you have some benchmarks of this algorithm? Yeah, so I haven't done any benchmarking, because it's hardly working yet. Of course it will come, but from what I think, this LSM approach with the static part is so well suited for R-trees and multi-dimensional data that I would expect it is probably the fastest thing you can get. It might not be super fast, but I don't think you can get much faster if it's properly implemented. We'll see how it works out, because RocksDB I think does a really good job on the performance side, and the benefit, which is also why I chose RocksDB, as I said, is that it has such a big community, so you can expect future performance improvements and basically get them for free, because if they improve something, yeah.
And RocksDB is also really run at Facebook in production, so it's not some toy project from someone; they really run parts of their infrastructure on RocksDB, so it's a really well-working database. So if there are no more questions, let's thank the speaker once more.
This talk is about implementing a R-tree on top of RocksDB, an LSM-tree (log-structured merge-tree) based key-value store. It will give an introduction about how RocksDB works and why LSM-trees are such a perfect fit to build an R-tree index on top of them. Finally there will be a deep dive into the actual R-tree implementation. RocksDB is a popular key-value store by Facebook (based on Google's LevelDB). It supports key and range lookups and basic spatial indexing based on GeoHash, but not an R-tree implementation which makes multi-dimensional queries possible. Those queries can combine things like location and time, but also any other property that can be represented as a numeric value, such as categories. This makes it possible to query e.g. for all flats with a certain size in a specific area that are not older than a few years and have a balcony.
10.5446/20347 (DOI)
Okay, welcome again to the next presentation by Mr Attila from Norway, I suppose. Yeah, he'll be talking on key and risk. Please welcome Mr Attila. Thank you. I'm trying to present a project that I finished almost a year ago, so warning, there will be obsolete things here, things I'm not proud of now, but I'm still kind of proud of this project I did. So let's try. Myself, Attila, I'm a software developer, mostly web mapping. This project has a GitHub page. Check that out. So KN what? The KN stands for culture and nature and Reise, that's a Norwegian word, meaning something like a trip or a journey. So the main idea here is a journey through culture and natural data. So it's a collaboration between a bunch of Norwegian governmental agencies. This collaboration was discontinued half a year ago and was re-sponded as a new organization called the KLab. So the main objectives of this collaboration was to work towards increasing access to and use of public information and local knowledge about culture and nature, typical political stuff really, and to promote the use of better quality open data. What I boiled this down to is collect, digitize, georeference and categorize data and publish it in open APIs. That's kind of cool. But the problem is an API is not sexy. They've spent, I think, two or three years collecting all this data, cataloging it and so on and so on. And in hindsight, they thought, well, we have to present this to the public in some way. So their main idea was, well, we have to display it on the map some way and display also the metadata for this. And a small little detail, make it interesting. So what I'm going to cover is what we did to try and fulfill this task. Another cool thing about it is we suggested that all the code we make for this application should be open sourced, which they agreed to. So what we made is, well, this is an overview they have used to show what we made, actually, and this is also what I'm going to present here today. But we have to start off with some constraints. It should be web-based. There was a requirement that there should be no server-side components at all. We should only use the existing APIs. A couple of months ago, I read about this new bus word called serverless architectures, and I think this is the definition of that. We should use open-source components as where possible, and we should open-source the result. Thing about open-sourcing a project like this, for me, which is not an open-source developer, per se, that posed some interesting challenges. I had to think a bit more about my code, how I published it, and so on. So where is the data? I mentioned APIs. There was originally, I thought that, well, all the data should be gathered in one API, called the Nevegiana API, which is an original version of the Europeana API. But that's theory. In practice, you end up having a bunch of different APIs because it's difficult to collect something in one place. And as we went on, more APIs were added. And as every developer knows, this is complex. It brings complexity to the project. Because you have different formats for data, different projections, different operations that the API support, the geometries will differ, and the schemas for the data themselves will also differ. So this is difficult. And because new APIs were added during the development process, I couldn't spend all my time copying all these APIs. So my thought was, well, black box it in, make this a component I can reuse in a simple way. 
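Something along the lines of the following hypothetical sketch, where every name, URL and field is made up for illustration; the real wrapper is described next.

```js
// A hypothetical black-box adapter: every API gets a small wrapper exposing
// the same few methods and always resolving to GeoJSON.
function createApiAdapter({ name, baseUrl, toGeoJSON }) {
  return {
    name,
    // "what's around this bounding box?"
    getWithinBbox(bbox) {
      const url = `${baseUrl}?bbox=${bbox.join(',')}`;
      return fetch(url)              // requires CORS on the API side
        .then(res => res.json())
        .then(toGeoJSON);            // normalise to a GeoJSON FeatureCollection
    },
  };
}

// Each dataset only has to supply its own mapping to GeoJSON:
const exampleApi = createApiAdapter({
  name: 'exampleDataset',
  baseUrl: 'https://example.org/api/search',
  toGeoJSON: raw => ({
    type: 'FeatureCollection',
    features: raw.items.map(item => ({
      type: 'Feature',
      geometry: { type: 'Point', coordinates: [item.lon, item.lat] },
      properties: { title: item.title, thumbnail: item.thumb },
    })),
  }),
});
```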
So what I did was define an wrapper around all these APIs, define three or four methods for getting data, and define GeoJSON as the output format for all of these APIs. So this was a small wrapper written in JavaScript, which I could then use in the rest of my application. But this gave us a perfect world. No, it didn't. Because the schemas varies too much to try and normalize them. So what do we do with that? Well, you can extract some common attributes from the data, like the title, a thumbnail link, and then use templating for the definition of how data from different datasets should be presented. And these templates had to be tweaked manually written for the different datasets. So this is the data. Then there's the client. How should we present this to the users? Well, web mapping application in 2015 isn't rocket science, so I'm not going to go into the depth of it. But there were some requirements, so you should be able to see what's around you. We had to support clustering in some way, because there are, for some reasons, much of these documents and data is clustered in, well, digitized, georeferenced at a single point in a city, the center point, for example. So you have to do something with that. You have to have solutions for displaying information. And we also tried to experiment with some more novel ideas, more interesting ways of interacting with the data, not just zooming around in the map. So we had to pick our tools. This is what I think is the most fun part about Open Source is that you have all this large box of Lego you can choose from. And I put together to make something that really works great. So we decided early on that we should use leaflet, mainly because somebody from the Echo and Rice organization had already tried that and was happy with it. So, well, let's go on. Of course, they started talking about 3D in some way. We said, well, we can try out CSIUM. I hadn't tried it before. Worked okay for our needs. We used the DB from Karo. That's a really easy way to store data, especially in an environment where you don't have any, well, the requirement was no server-side components, so I couldn't install GIS myself. Then I decided on using plain, pure vanilla GIS, no frameworks at all. Kind of worked. Probably would have benefited from using some MV star framework, but turned out great. And then there was a bunch of small utility libraries for dealing with different formats, directions, and so on. And being a developer, styling is not my favorite thing. Then it comes to CSS. Well, I can manage to get it to work somehow, but it's messy. Hope. I was so lucky that the client, the guy from the client side from the ConRace organization I talked to. He was a proficient developer himself or designer, so in the new CSS, you could handle a lot of that stuff. And then there's Bootstrap, so you're covered. So some slides showing this application in use. This is an area in Throne 9, the city I live in Norway, and then we showed data from a couple, well, five, six, seven different APIs in the map using a thumbnail. Then you have another map, a thematic map showing content related to the Second World War in Norway. Here I've clicked one of the markers. I think it's the one up here. And then we get to that template showing the data. This is a video. So we could play that. This is our 3D experiments. 
We should have used aerial photos, but we have this requirement to use open data, then a region mapping agency doesn't provide a free, openly accessible service to access their aerial photos. So, well, deal with it. A fun thing is that all code for dealing with presenting the metadata on the sidebar is the same as used in a leaflet application. Then we tried doing something more, as I said, in a very ways of exploring the data. We focused on a line, the blue line here. When you scroll the map, you just move along the line and get info on what's related to that line. Let's not show if it worked out, but it was okay to experiment with at least. One more thing, which I think is kind of cool, is after working with this for a month or so, I started to realize that parameterization is key here. So all my code started to just take simple configuration inputs. And I thought, well, that means we can make a generator for this kind of stuff. We made a simple web page where you just click on the area of interest you want, the data sets you want displayed, and some small metadata, then generate a URL. And that URL is parsed by the application and used to display on the map of a selected area containing a selected set of data sets. So that means that with no coding at all, you can set up your own map. That's kind of cool. Then there is documentation in a perfect world. All developers write a lot of documentation. Every project is easy to understand. In reality, that's not so. We load writing documentation. Well, I realized if we should have any hope of getting somebody to use this, it should be documented. So I did my best. What I did was every time I got a question from the Code and Rise organization, how does this work, instead of replying by mail, I made a markdown file describing what they should do and asked them to review that. It was this solve your problem. So I think we're OK on documentation at least. We also made some CodePen interactive demos showing how to use the code in simple ways at least. So that's available. Another thing is that I think the workflow here turned out to be kind of open source, the open source way of developing software. We were, the client was based in Aslo. I was based in TronLimes. We didn't have daily meetings at all. Most of the communication was by email or issues on GitHub. We had some Skype meetings and two or three in-person meetings. I think this worked really well. So go on and try that way of working for a small project. I was fortunate enough that our team was small. It was me and a couple of colleagues of mine for some time. And I had a really technical, knowledgeable customer that really works. It helps. And the process here was really open-ended. They basically came to us to me and said, we have this data. We want to do something with it. We think you probably know more about this than we do. So could you please try and make something cool? And we had status discussions each every day. I got some ideas later nights and an email of got a response. Well, spent a couple of hours, try, see if it goes somewhere. I'm going faster. Key takeaways from this project. Cores, the support for requesting data from third-party websites, is key to making a serverless architecture work. Unfortunately, not many APIs support cores. So I think I can have enough hands to count all the times. Could you please enable cores on your APIs so that I can use it? Another takeaway is plan for complexity. Realize that in somewhere down the line, things will get complex. 
Try to structure a code so that you can deal with that. At the same time, keep it simple. Keep the simple code simple to use, simple to understand. That's a trade-off. Another observation I made is that the Sparkle query language is not made for humans. I consider myself proficient in SQL, but the Sparkle language blew my mind every time. I couldn't wrap my head around it. I was lucky enough that we had somebody developing the Sparkle endpoints who helped us out with that part. And of course, there are some unsolved problems here. For example, caching. Every time you zoom or pan them up, you have to reload everything. That's slow. I should have used more modern JavaScript technologies, browser-file-redpack or something, because the dependencies here, all these LEGO bricks I've imported, is kind of messy now. We should have made some deployment scripts or made the process of releasing this a bit simpler. Every developer said I should have had the tests to this code. But we were fairly limited in time. I think we spent 400 hours or something, and the focus was on new ideas, not perfect code. Recently, I discovered that the site we're running on does not support HTTPS, so geolocation does not work because of the new security updates. And as a final quote, I think this quote is really great, because the idea that if you open-source something, someone will step up and develop it further, that's a lie. The code is up there. If there's nobody assigned to do it, nothing happens. So this project has been, well, I've been hired to do some additions to it the last year, but apart from that, nothing has happened. So my hope is that at least something in there is usable for older people. I hope that the organization, Co-analyze and OKLab, continues to use this. They do, but there's currently no development on this. And I think that's a bit sad. So that's it for me. You find the code on GitHub, you find the application itself on the other address here. Thank you. APPLAUSE Thank you, Attila, for your very interesting project. Any questions from the floor? Yes, please. Hi, Attila. I was just wondering about the Sparkle endpoint. You could tell us a bit more about what kind of data is going, made available through that, whether it supports GeoSparkle, and also, crucially, does anyone actually use that endpoint? LAUGHTER Yes. You get the question. Well, as far as I understand, we did not make this endpoint that was set up by some of the organizations here. I think the Ministry of Cultural, something. They are developing it, using it. It's not a GeoSparkle endpoint, so we had to do some clever tricks to make spatial queries against it. And what kind of data is there? There is all this archaeological digging sites in Norway and finds there, so there is a really interesting dataset available. So I really appreciate that the data is there. I just have a problem with understanding the Korean language. That's all. You know what? Any more questions? Questions, please. You still have time. Seven minutes? No? If none, then please give a big hand to Mr Akula. Thank you very much for your very interesting project.
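As a closing illustration of the configuration-driven setup described earlier in the talk (a generator producing a URL that the client parses to build a map), here is a hypothetical sketch; the parameter names and tile URL are made up, and only standard Leaflet calls are used.

```js
// Read the generated URL parameters and configure a plain Leaflet map.
const params = new URLSearchParams(window.location.search);

const config = {
  datasets: (params.get('datasets') || '').split(',').filter(Boolean),
  bbox: (params.get('bbox') || '').split(',').map(Number), // [minLon, minLat, maxLon, maxLat]
  title: params.get('title') || 'My map',
};

const map = L.map('map');
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '© OpenStreetMap contributors',
}).addTo(map);

if (config.bbox.length === 4) {
  map.fitBounds([
    [config.bbox[1], config.bbox[0]],
    [config.bbox[3], config.bbox[2]],
  ]);
}

// Each configured dataset would then be fetched through its adapter and
// added to the map, e.g. as a GeoJSON layer:
// L.geoJSON(featureCollection).addTo(map);
```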
KNReise is a collaboration-project among Norwegian governmental bodies working with cultural, historical and natural data. As the project neared it's conclusion, and had gathered, created and geolocated a huge amount of data and published it using REST APIs the next step was to present the data in a uniform manner. We where tasked with making a client-side only, fully configurable, OpenSource web solution for displaying data from a number of different APIs. Using OpenSource components we where able to pull this off, and this talk will present both the product as well as the process.
10.5446/20346 (DOI)
This is always the problem when you come up, when you end up as the third person in the session: the first two speakers have said nearly everything you were going to say by the time you get on stage. Fortunately I realised this before I arrived, so I wrote a different talk than the one I said I was going to write, because I nearly always do write a different talk than the one I said I was going to write. The other problem was I tested this talk at the FOSS4G UK session, and the guy in front of me there came up with a completely new solution to the problem that made the talk I had pre-written irrelevant, so I've written yet another talk. There's a lot of work in this talk. Let me make sure. To start off with, surely all web maps are pretty? We're all geographers, we're cartographers, all our maps are pretty, right? Of course some maps aren't pretty. If you want some fun, you're bored in the office or you're stuck on the train, there's a hashtag CartoFail, which will show you some brilliant examples of maps that clearly aren't right. I'm pretty sure that's not how maps work when you're filling up a country with Hispanic people. Or you could go for the ever popular rainbow colour scheme. This is telling us about heat, but somehow you have to know that this orange is slightly less, this brown in the middle, and then it goes purple, pink, yellow. It's terrible. You can't work out whether, you know, this little yellow splodge in the middle here is the worst bit. Is it worse than these green bits that are nearly the same colour? Don't make maps like that. Equally, don't make maps like this. Okay, it's a nice distinctive colour scheme. You see the blue at the end, that's $1.4 million, the yellow next to it's 12, then it goes to 65, then it goes back to 22, down to 1.5 at the other end. That's it. And there's no normalisation, it's just raw counts. So obviously Texas wastes more money than Pennsylvania does, because it's bigger, it gets more government money in the first place. Goodness knows what this is actually a map of. I just don't even know where to start on that one. And these are professionals, this is the USDA, they get paid to produce these maps to tell policy makers where to spend money. And apparently France, I think. Or maybe everywhere except France, I'm not sure. Again, that's quite a nice one. I don't really know what it's showing me. I can't really work it out. It's just comparing random things from random countries divided by different random things. So don't make maps like those, everybody. If you want to make pretty maps on the web, there's SLD, which is a standard. Hardly anybody implements it, but it's a really good standard. Those people that do implement it have all added some extra bits on just to make it work better. So, a slide from back when Boundless was still known as OpenGeo. Basically, yes, nobody should be writing SLD by hand: SLD is for robots and CSS is for people, so you should use CSS. I love that slide, it's brilliant. So that's what an SLD file looks like, you've seen quite a few of those. You actually have to scroll down a bit; I've been scrolling in my slides to be able to show the whole of the SLD block. So you should use CSS. Fortunately, the previous two speakers have answered this point for me. It's not a standard. Everybody implements it differently.
So just because you think you're going to write some CSS, you've got to go away and learn a completely different set of CSS if you're using Mapnic or if you're using Geo Server. As far as I understand, there are two different competing MapCSS styles that NoB supports. Kaske, yes, it's horrible. Plus the fact that it kind of assumes that you're a web developer. I don't know about you guys, but I leave actually making the rest of the website after the map looked pretty to the professional web developers. So I don't really do CSS. So it's a bit of a nightmare too. But it is nice and compact. I didn't have to scroll that one very far to be able to do the standard Geo Server population-style map. So you come to writing this stuff. You can write it by hand in SLD. I do. Because I'm a programmer, I like writing XML. I can do writing XML in my head. I've got an editor that's set up so that when I open a tag, it closes it for me automatically and does self-completion on tabs. And things are great. But most of the people, users that I teach to do stuff, web mapping, they're not programmers. They're geographers. They don't like hand editing. You can use a GIS, which is okay. Until you come to finding that you've got 500,000, you've got 500 different layers that they want styled separately. And that takes you forever in QGIS. Or you can use a program, which is brilliant if you can get a program that knows what you want. To understand what mapping is about. So those are the sort of people who should be allowed to write SLD. Essentially robots. So if you're hand editing it, use a text editor. Don't use word. Nothing does worse things to an XML file than putting it through word. Because you get those stupid smart quotes and Geo Server chokes when you feed them those. I've had people's years send me stuff saying, I've written this SLD on it. It's just the same way as you told me to do it in the course. Yes, you did it in word. It's not anything like what you typed any longer. Use Geo Server editor. It validates for you. It does coloring. It will point out where your mistakes are. Coming from 210, it will show you the map as you're going along. Okay, it's a bit annoying if you're doing it for any length of time. Because Geo Server will log you out or your web session will crash. You lose all your work. But... Oh, even better. So that's what anybody who hasn't seen the style editor looks like. It's all pretty colored. You also get these days a generate a default style button. Not enough people know about that. So if you just want a colored polygon, you click that button and it will give you a random colored polygon. Or you have to just change the color. And there you go. Nobody should ever start writing... Even if you're writing SLD by hand, you should never start with an empty file. You should make an existing SLD file and modify it. Because that way you can get the typos and spelling mistakes that James and I put in ten years ago. Propagating on through life. Brilliant. You can track them around the world that way. If you're using a GIS editor, the two main ones that I would recommend are QGIS and UDIG. QGIS, so good styling tools. It's got the nice shiny dock that Nathan's upset that can get mentioned in the new features of QGIS. If you haven't used 216 yet, live styling on the layers is brilliant. Doesn't do label export yet, as Andrea said. The compatibility between QGIS and GIS is getting better. It's still not entirely there. There are still bits missing. 
We're kicking around various ideas like whether we can make a QML to SLD converter. Whether that's actually easier than trying to fix up QGIS's SLD export. If anybody desperately needs to do QGIS to SLD, please come and talk to us and bring your checkbook. That's what SLD looks like. The QGIS styling layers look like if you want to. UDIG, it's got very good styling tools. It uses the same SLD generator as GeoServer does, so they're very compatible. You can even edit the raw or XML if you still want to. If you want to give interface and get bored with missing ram, you want to just gently tweak it. It's somewhat unstable. It's not terribly active development any longer. There are a few people still using it. It has a new release coming out this week, but it's some distance behind the latest GeoTools releases. Again, I'd love more people to jump on the bandwagon and start supporting UDIG. That's what it is. You've got a lines layer, a points layer, and a raw XML feed. Custom SLD editors. They were specifically written to generate SLD based on what geographers need. Shapefile viewer, it's a project that I started two years ago, because I had a client that needed to style up an enormous number of shapefiles as statistical data. I couldn't be bothered to do it by hand. Basically, you load a shapefile, you click on the attribute you want to style, you select your classification scheme, do you want equal classes, jenksies, natural breaks, all of the different classifications that GeoTools supports. You select the color-brew a drop-down palette, then you say, do I want a converging or a diverging scheme, do I want a monotonic scheme? How many classes do I want? Press Go, and it saves you out of shapefile. It's fun. I haven't done any work on it for a year or so. If anybody wants to fork it and do some work, great. I've left a list of issues that need doing. That's what that looks like. Basically, you've got a list of the attributes. You can say whether you want the labels turned on, do you want borders turned on, that sort of thing. It solved a very particular problem I had, but I thought it was maybe useful enough that somebody else might use it eventually. Here's just the guy that appeared at Southampton and did the talk before mine. He arrived. I never met the guy. He had posted a couple of questions on the GeoTools list, so we knew he existed. A guy called Rob Ward at SysSys. He had a similar problem. He had a client that was moving. It was the coldboard, and they had, again, 10,000 different layers in ArcMap that they needed to convert to SLD. He quoted them to a million pounds to do this conversion for you. They said, oh, there must be a quicker way of doing it. So they sat down and thought about it. He said, well, I could write this general tool that did it. He called it SLD editor. So interactive GUI program for writing SLDs. It was developed by SysSys. It applied for OSGO incubation this morning, because I told Jody that I'll mentor it. Providing that somebody on the incubator list has said plus one by now. It's an official OSGO incubating project. It's currently living at GitHub. If you search for SLD editor, it turns up. If I have time, I'll show you an actual living, a live demo. But you can specify the shape file you're interested in. You can edit the symbols. You click on the color. You get a color picker. You click on the rule. You can edit the rule. You can edit expressions. You can do the complicated filtering that you want. It's really good. 
I would love it if people came along and helped us out getting this up to production quality. There was the prettier maps bit. Despite having, for the last 20 years or so, passed myself off as a geographer. I'm not really a geographer at heart. I'm a geophysicist before I was a geographer. I was really pleased when I read Anita and Gretchen's book, QGIS map design. She lots of things about pretty maps. This is one of their examples. It's some natural Earth, GDP data. This is actually drawn in GeoServer. It's a web map. All I've done is convert the tips they give you to their SLD. Things like setting the background color of a map. Web maps don't have to have a white background. They could be gray or they could be blue. You can make them C-colored. Just add a background color equals and then another color. Again, choose a good color scheme. Don't choose a rainbow color scheme. You can use interpolate instead of using a dozen filters for your classes. You can use the interpolate function that we provide in Geotools. Here we go. It's at the top. We've got the fill and then inside there interpolates. It's based on GDP divided by population estimate. It's multiplied by a million. Then if it's zero, it gives the color, if it's a thousand, it's this color. If it's up to 5,000, it's this color. So on down the page. That's a lot shorter than rewriting that whole fill block and everything else for each of those filters. It doesn't have to be long and repetitive if you don't want it to. You can do things like this in GeoServer as well. It's perfectly possible to draw nice circles on the screen. It's a bit trickier than with QGIS because QGIS, you're drawing a static map. So you can adjust your cut sizes to match, fit in the countries. GeoServer, people are going to zoom in and out of it. It's an interactive map. So judging your circle sizes is a bit trickier. To say, if you need to, draw the layer twice. That wasn't what I meant. I've drawn it once with the gray outlines of the boundaries. The second time, I've used a dot symbolizer on the centroid of the polygon to put the circle in. There are geometry functions. So you don't have to precalculate the centroid. You can do that as you're going along. And we've got lots of maths functions in there. So you can calculate the sizes. So this is square root of population estimate divided by the square root of 10 million. Tie multiplied by 12. I had to actually go and look in the QGIS source code to find out how they did proportional circles. So I could actually match the original map. But it's perfectly possible. And we have square root functions and such like. This is one of them from the book. Again, I've redrawn it in Geoserver. It's a very pretty map, the Philippines. And they do these nice cartographically cute curved labels for the sea. So again, as Andrea said, Geoserver is quite happy following lines. If you don't want to see the lines, you don't have to draw the lines. You can label things that aren't drawn. So each of these seas, I've actually just sketched in a little curved line there to give in QGIS and saved it as a shapefile. And then just that label. But I've not drawn the line. Other things you might want to think about. Don't pick bad chloropletes. Both QGIS and UDIG provide you access to the color brewer palettes. The worst comes to worst, you can go to colorbrewer.org and find the color palettes. So don't use the rainbow color palettes unless you want Kenny's Fields to make fun of you on Twitter. And he will do. 
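Not SLD, but a small JavaScript sketch of what that Interpolate approach does conceptually: pick the two stops surrounding a value and blend their colours linearly. The stop values and colours here are invented.

```js
// Colour stops: below the first or above the last, the end colour is used;
// in between, blend linearly between the two surrounding stops.
const stops = [
  { value: 0,     color: [255, 255, 204] },
  { value: 1000,  color: [161, 218, 180] },
  { value: 5000,  color: [65, 182, 196] },
  { value: 20000, color: [34, 94, 168] },
];

function interpolateColor(value) {
  if (value <= stops[0].value) return stops[0].color;
  for (let i = 1; i < stops.length; i++) {
    if (value <= stops[i].value) {
      const a = stops[i - 1], b = stops[i];
      const t = (value - a.value) / (b.value - a.value);
      return a.color.map((c, k) => Math.round(c + t * (b.color[k] - c)));
    }
  }
  return stops[stops.length - 1].color;
}

console.log(interpolateColor(3000)); // a colour between the 1000 and 5000 stops
```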
The other one always comes up. Missing and repeated labels. Why didn't my polygon get labeled? Or why is my polygon labeled four times? It's because you're using tiling and it's chopping it up. You could be four different Geoservers who've drawn those tiles. So none of them know what the other one's done. So use a centroid, specify where you want to put the label and allow partial so the label can go across the tiles. Finally, think about how your map will look and think about how it will be used. Because these are important points. This is actually, you're not drawing maps just for the fun of drawing maps mostly. Though obviously some of us do that. Mostly you're drawing maps to convey some information to your users. And you need to think about what that information is. Okay, that's how you can find me. This talk is up on my website. I'll tweet that in a minute as well. And then if I'm really quick, I could just about. I can find the mouse. Which side of the screen is my mouse on? I'll give you a quick live demo of the SLD editor. Okay, I might have to close. Down so you can see it. So, okay, as you draw SLD. But you can pick your data source. So you can see the results on a map. And that's your standard Geotools map. And then here you've got a polygon. You can edit the fill or you could edit the rule. So you can see the actual filter up there that I'm using. And if need be, you can edit more complicated filters. So it's very powerful tool. And it lets you choose, you know, you can pull things down from a remote geoserver. If you've got admin rights to that. Push the cells back up using REST. And if you've got an ArcMap license, it can read NXT files. And start to convert those into SLDs for you. It does a first pass automatically and then you can tweak them to actually make it work properly. Okay. And do you have questions? Hi. So does the SLD editor also convert the existing symbology from ArcGIS? I believe so. I don't have an ArcMap license so I've never been able to try it. I have a Linux machine so I can't run ArcMap. But I think so. Any more questions? I have a question from folks here on Dandelija also. So it's a nice project based on SLD. I know my colleague Oliver Erz is the leader of the special working group of OGC. And he told me he hasn't had too much feedback from the community. And even especially also from all those nice software developers who enhance SLDs. So you are all satisfied. We haven't even moved on to symbology encoding completely yet. But I know there are restrictions and of course there are those proprietary extensions for Chio server. And QGIS can do more than SLD. So it is standard somehow stalling. I'm afraid so. I was on the original SLD working group when I was back still working for an employer who worked for the OGC. Who was a member of the OGC. But apart from QGIS and Chio server nobody else is even attempting to do SLD. So there is no real effort, reason to put any more effort into making it compatible. Until somebody else implements the standard there is no incentive. Right. Well for us the thing is that our completely styling language and the engine itself is produced on SLD as materializing Java objects. So we are bound to it and that's why we keep on pushing it forward. Because it's much easier to just add extras to it rather than rewrite everything. Feedback to OGC always takes time. Actually you guys are just joined haven't you? Time is not something that we have in large amounts. 
If I had one feedback is that in order to extend the SLD we actually had to break it. It would have been nice if SLD had a clean way to add an extension as opposed to... If you have to do it by the rule you would have to create your own schema, extend the existing elements, add your own extra attributes. Then nobody would be able to read your file because you would need a client that understands that you extended the schema. So it's kind of ugly to put these vendor option tags which are not part of SLD. We invented them in order to get extras. So there are two things about SLD. One is to take a style from one system to the other. You know there will be loss of information so you can say well okay if I cannot represent 100% who cares. The other thing is export to get back. This is actually also what QJAS is doing. QJAS can export SLD but it can also import it. Don't know about import. Yeah, there is code to import SLD as well. So you can read an SLD style and bring it into the QJAS editor. And it's going to read its own vendor extension to round trip. So it's important to be able to easily extend the SLD. Right now it's not. Yeah. Is there still some question we have time for one or two? If not, thank you. Ian and the other ones. Let's give them a hand.
Web maps needn't be dull and this talk will show you how you can take your cartographic skills from the desktop GIS to the web using SLD and GeoServer. The initial part of the talk will introduce desktop tools such as QGIS and UDig and how they can help novices get started with styling maps. Moving beyond the basics it will continue with a look at the use of functions to modify the features being drawn. It will include an in depth look at how to control the placement of labels to enhance the readability of the map especially when using tile caching to speed up map service. The talk will finish with a discussion of using GeoServer's composite and blending modes to provide pretty effects that can enhance your web mapping.
10.5446/20341 (DOI)
So, welcome to this third and last, but not least, presentation of the 3D track. It should be really interesting, with a lot of immersion, as Vincent will show us the iTowns open source software. Okay, thank you. So, I'm Vincent Picavet, and I'm going to talk about iTowns, which is a new WebGL visualization framework. Just a few words about Oslandia: we are an open source company, and currently we're doing a lot of 3D stuff, especially working with point clouds and immersive visualization. So what's iTowns? iTowns is a new piece of software which is actually not new; I'll tell you a bit about its story. It does 3D visualization on the web. So it's a WebGL and JavaScript framework, and its aim is 3D visualization and immersive visualization. And it's open source. The license is CeCILL-B, which is something really French oriented, but at the same time you get the MIT license as well, so it's very open. On the technical side, it's based on JavaScript, of course. It's based on WebGL, because we do 3D in the browser. And one thing which is kind of important is that it's based on three.js. three.js is a famous library for 3D visualization and geometry management in the browser. It's used for a lot of different use cases: for video games, for advertisements, for websites, very dynamic 3D websites, for video clips and so on. It also has a very large community, and it's very stable and progressing pretty fast. So this is one of the main points of iTowns, it's using this library. This is the same library that Potree uses, actually. There are a lot of shaders as well in iTowns, which allow specific visualization effects. And so that's the technology we use in iTowns 1 and iTowns 2. And one note is that the iTowns project is about the client side only, so it's really a web framework. Let's talk about the history of the project, because it's kind of a strange story, more or less, and there are four steps to it. So the really first version of iTowns was born at the French national mapping agency, IGN, in a research laboratory called MATIS, which deals with everything related to measuring the world: how do you scan and sense the 3D environment? And the first version of the iTowns project was a Flash application. Yes, I know. It allowed panoramic image visualization, it allowed annotating objects inside the images, and it allowed visualizing point clouds, LiDAR point clouds. Well, they realized a few years later that Flash was not really a future-proof technology, so they changed to WebGL and GPU accelerated technologies in 2011. The aim with this first new version was to be able to visualize LiDAR and big volumes, as well as to support meshes. Why was it created? Mainly because of the Stereopolis vehicle. The Stereopolis vehicle is the background we come from: it's mobile mapping. You probably know the Street View vehicle, you've seen it. It's a vehicle with a lot of sensors on it: you can find image sensors, so cameras, you have a LiDAR sensor, you've got an IMU which allows us to sense the trajectory, you have a GPS, and you can also sense the speed. This is a vehicle from IGN that they use as a research vehicle; they also have a production vehicle. So lots of sensors: you have different kinds of sensors for images, but that's mostly cameras; for LiDAR, there are a lot of different LiDARs as well; and GPS and IMU.
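Since iTowns builds on three.js, here is a minimal, self-contained sketch (not actual iTowns code) of how a point cloud can be put on screen with three.js; it assumes a recent three.js version and uses random points instead of real LiDAR data.

```js
import * as THREE from 'three';

// Basic scene, camera and renderer.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 50;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Fake LiDAR-like points: a flat Float32Array of x, y, z triplets.
const count = 100000;
const positions = new Float32Array(count * 3);
for (let i = 0; i < count * 3; i++) {
  positions[i] = (Math.random() - 0.5) * 100;
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const material = new THREE.PointsMaterial({ size: 0.5, color: 0xffffff });
scene.add(new THREE.Points(geometry, material));

// Render loop at (ideally) 60 fps.
function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```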
From that you sense data: you get data from the IMU, the GPS and the odometer, and that gives you a trajectory with position and orientation, at one to two hundred points every second, so not a very high frequency, but still. It also outputs images: you get nine images, because there are nine cameras, every two meters. What you can see on the picture is every single spot where images have been taken along a small trajectory. And then you have the laser, and the laser outputs something like 300,000 points per second. So the objective was to be able to visualize this data on the web, which is kind of demanding in terms of volumes. That was the main goal of iTowns, and this is still the goal we are pursuing. So the open source project went on with this WebGL framework based on Three.js, and step three was actually last year, in August, so that's one year ago. IGN took the decision to make the framework open source. After years of hesitation, not really knowing what they wanted to do with it, they decided, okay, let's go, let's open source this software, and at the same time they took the decision, which is kind of official, that iTowns would be the 3D visualization tool for the next version of the national Géoportail, which will probably come in 2017 or 2018. So that was August last year. They took the decision, they came to us and said, okay, how can we collaborate on this tool? We did some consulting to clean the code, because there was a lot of unlicensed material and a few issues with the source code of the first version, and we had the first release of iTowns 1.0 in February this year; it took quite some time to clean everything and get the release out. During that time they were also refactoring everything and starting version 2.0. The first commits for version 2 date back to August last year as well, and it is under heavy development. So nowadays we have two versions of iTowns: the legacy one, which is operational, you can do things with it, but it's more like a technology preview. We have the algorithms, we know what we want to do, we know the kind of visualization we want to achieve, but the code itself is not something we can really use for industrial projects; a lot of refactoring is needed. That's why version 2.0 was started. Version 2.0 is currently under heavy development, and we plan to implement all the features of iTowns 1 in iTowns 2. Everything is on GitHub. We are collaborating with IGN as an open source project now, which is not easy, because it's a culture switch for the Institute too, but the researchers are pretty good and very keen to go deep into open source. So what about the data types we want to display? We have different kinds of GIS data types we want to bring into this 3D environment. First, oriented images, and I'll talk more about those just after. Then point clouds; you now know about point clouds with Potree and all, so we want to be able to visualize that LiDAR data. We also want to visualize extruded buildings, so 2D data which we transform into 3D, and 3D meshes, 3D buildings with textures, as well as traditional GIS web services, with WMTS for the terrain, for the aerial imagery and for other data layers. In the iTowns project we also have data samples: IGN provided a subset of the data from the Stereopolis vehicle, corresponding to one neighborhood of Paris.
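To get a sense of why those sensor rates are demanding in terms of volume, here is a rough back-of-the-envelope estimate in Python. Only the LiDAR rate and the nine-images-every-two-meters figure come from the talk; the per-point size, image size and survey speed are assumptions made purely for illustration.

```python
# Rough back-of-the-envelope estimate of Stereopolis-style data volumes.
# The per-point size, image size and speed below are assumptions for
# illustration, not figures given in the talk.

LIDAR_POINTS_PER_SECOND = 300_000      # stated in the talk
BYTES_PER_POINT = 28                   # assumed: xyz doubles + intensity/attributes
IMAGES_PER_2_METERS = 9                # stated in the talk
MB_PER_IMAGE = 2                       # assumed JPEG size
SPEED_M_PER_S = 10                     # assumed ~36 km/h survey speed

seconds = 3600                         # one hour of driving
points = LIDAR_POINTS_PER_SECOND * seconds
lidar_gb = points * BYTES_PER_POINT / 1e9

distance_m = SPEED_M_PER_S * seconds
images = distance_m / 2 * IMAGES_PER_2_METERS
image_gb = images * MB_PER_IMAGE / 1000

print(f"~{points / 1e9:.1f} billion points, ~{lidar_gb:.0f} GB of LiDAR per hour")
print(f"~{images:.0f} images, ~{image_gb:.0f} GB of imagery per hour")
```

Under these assumptions, a single hour of driving already yields roughly a billion points and hundreds of gigabytes of imagery, which is why streaming, tiling and level of detail matter so much on the web.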
We have two quality levels for the data set: a high quality one, which is 600 megabytes, and a low quality one, which is 60 megabytes, and that's just a small neighborhood, so you can imagine that the whole city of Paris is really big. This is not open data; it's under a CC BY-NC-ND 3.0 license, but you can use it for research or just for testing the software. It provides 3D textured meshes for the buildings, oriented images, LiDAR data, and vector data for elevation and 2D building footprints. So we have a data set we can play with, and that's very important for our end users. Let's focus on oriented images. What is an oriented image? It's simply an image for which you know where it was taken from and with which orientation, so you know the position, the location and the orientation. What we do with these images is project them dynamically, in the browser, onto 3D data, and especially onto buildings extruded from 2D data, or onto meshes. That allows free navigation and more precision in the visualization. If you look at the picture you can see the 3D data and the building; the square you see is the front of the building. It's flat, and we project the image onto the building. It's not a texture per se, it's a projection of the image. In this case we even have a projection onto the point cloud as well, which is something we are researching. So you have your 3D buildings, you have your image on the right, and if you look at the projected image from the shooter's point of view, you don't see any difference with the original image. But when you move a bit to the side, as in the lower right image, there is some deformation, because you changed the point of view compared to the shooter's. And if you move even further, which is the lower left image, you can see that the image is projected onto the 3D buildings. With one image this is not yet immersive visualization, but if you have multiple images, then as soon as the user moves you can load the new images which correspond best to where the user is located and re-project them onto the buildings. That way you get a completely immersive environment. So, iTowns version 2: that's step four, that's where we actually stand. It's a full refactoring of the application. We keep the algorithms, we keep all the principles, but we rewrite all the code. There are also a few new features. There is a globe, because a globe is sexy and fun, so they wanted a globe. That's not the main point of the application, but we do have it. We have a high-level API as well, so it's easier to incorporate the framework into your own application. We have web services support (WMTS, with WMS and WFS planned), more documentation, more examples, and 3D mesh support with KML or glTF formats. The roadmap is to release an alpha version in autumn this year and the 2.0 version late this year or at the beginning of next year. So I've got some videos to show. Let's get started with this one. This is the globe view of version 2.0. The video is a mix of version 1 and version 2 to show the features. This is version 2, and this is the globe, a classic globe visualization tool. There is a timeline animation, so time is taken into account. We have some fancy features with sunrise and sunset. Of course you have elevation with a very precise DTM. You can zoom in, with automatic loading of the data using a tiling scheme, which is now classic for globe applications.
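As a side note, the geometry behind an oriented image is the standard pinhole camera model: knowing the camera intrinsics plus its position and orientation is enough to map any 3D point of the scene to a pixel, which is what lets the viewer re-project the photo onto extruded buildings or a point cloud. The following is a minimal NumPy sketch of that idea; it is not iTowns code, and all the numbers are made up for illustration.

```python
# Minimal sketch of the maths behind "oriented images": a known camera pose
# (position + orientation) and intrinsics map any 3D scene point to a pixel.
# NOT iTowns code; all values below are invented for illustration.
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],      # assumed focal length / principal point
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                             # assumed camera orientation (from the IMU)
t = np.array([[0.0], [0.0], [2.5]])       # assumed camera position (from the GPS), metres

P = K @ np.hstack([R, t])                 # 3x4 projection matrix

X = np.array([4.0, 1.0, 10.0, 1.0])       # a 3D scene point in homogeneous coordinates
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]           # pixel coordinates in the oriented image
print(f"scene point projects to pixel ({u:.1f}, {v:.1f})")
```

In the actual viewer this mapping is evaluated per fragment in a shader, using the photo as a projective texture, but the matrix algebra is the same.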
So you can see the sunset there. You can animate data as well; this is an example with satellite infrared imagery animated on the globe. So that's more classical. Since we use Three.js, which is a very versatile framework, you can do a lot of things very easily, like this animation of flooding. It's only a 3D plane which goes up and down. This is the Seine river in Paris, and there has been a lot of flooding recently, so this is kind of a flood simulation; well, it's much, much exaggerated, it wasn't as high as you can see on the video. Here you can see that you have some 3D buildings too, and you can have lots of different kinds of data. And this is the immersive view: you can go from the globe down to the 3D data, you see the 2D extruded data, then you go to street level, and that's where you enter the immersive view. So this is 3D data (well, this is 2D data extruded) and we project the imagery directly onto the 3D data. You don't see the 3D objects per se; it's just as if you were in the environment. We support point clouds as well. Here again, with the point cloud service, it loads dynamically from the closest point to where you are, and you can enter the immersive view again and visualize the point clouds while keeping the images loading and projecting onto the 3D data. There are a few tools; that's version 1, the tools are not ported to version 2 yet, but that will come at some point, and you have tools to do measurements. We work in projected space here, so you have your local projection and the measurements are done directly in the right reference system. You don't have any reprojection, so you get exactly the measurement which was done by the sensor, which is the best precision you can have. You can do point positioning, you can do distance computation, and you can do a few things on top of your point clouds; and the thing is, the measurements are done on the point clouds, not on the images. It loads dynamically, and I think that's about all for this video. Oh no, that's version 2 as well. Here you can see that we project the images onto the mesh, with the 3D extruded buildings, and onto the point cloud. So you have a very high definition for the color of the point cloud, which changes when you change location, because you are going to load the next projected images. That's a mine somewhere; you have mines, and you can have some tools to digitize things as well. You can load vector data: here you can see, in more or less red, vector data on the streets, and you can incorporate 3D models. That's a Collada model, a KMZ, because Three.js supports a lot of different things and it's very easy to incorporate new features into iTowns. This is an example of incorporating a video as a texture, and you can even play the video, just as if you were loading a website, actually inside the 3D environment, so you can do virtual reality. And this is very easy to implement, because Three.js makes it work, and it's pretty impressive. Yeah, 3D buildings with high quality; you can load shapefiles, here for trajectories; and that's all for this video. I've got another one which shows a bit of a different environment. Okay, this is the video from the workshop we did this week at FOSS4G. This is iTowns 2 with a few specific improvements we have done. If it loads... doesn't matter. Okay, now you won't have... oh yeah, sorry, let me get it back.
So this is iTowns 2, the new version, and we have integrated Potree inside iTowns 2. So thanks to Potree we can display point clouds very easily, and we can use all the features of Potree. It's still a work in progress, since we have a few things to improve in Potree and in iTowns to do that. That's the city of Montreal; the buildings are served by a 3D server and the point clouds are also served by a point cloud server. Okay, let me quickly go through the rest. So what about the future of iTowns? We want new features: Potree integration is almost done, we have a branch doing that. We want projected images on point clouds; we have a proof of concept working, I showed it. We want to improve our build system and continuous integration for better quality. We want a user interface. We would like to integrate external APIs, like HERE or maybe Mapillary; we lack some information on the Mapillary API, but it could come. We want vector tile support, and 3D web service connections with 3D Tiles support at some point. That's what we are looking at for the future. As for the server side, and I'm almost done, we want to do some streaming from the server side to iTowns. There is a project we are working on to build new 3D web services for 3D objects, meshes, point clouds, oriented and panoramic images, with 3D Tiles support. That's a work in progress and we still have a lot of work to do. So the plan for the future is to release version 2, improve the code quality (that's what we are doing right now), integrate more contributors (you are welcome to talk to us and get into the project), and have better collaboration as well. We have a first industrial project coming, for iTowns integration into a specific application. Of course, as an open source project, we are open to funding and contributions, and we do a lot of communication so that we can do all that. So thanks, thank you. Merci. And if you have any questions, that's my time. Thank you very much, Vincent. So we'll have time for a few questions. Yes. Hi, thank you. Just one question: bandwidth is usually a major concern for 3D viewers. What about iTowns? So yeah, it's pretty complicated, because there are a lot of different aspects to take into account. We are improving the client side and the server side at the same time, because you cannot have one without the other. We try to optimize the formats we are using; it depends on the kind of data you are talking about. If you are talking about point clouds, there is a lot of work going on, with Entwine and with LAZ compression, so we try to use that as much as possible; and actually for point clouds we benefit from the work which is done on Potree. As for images, we have level-of-detail management, which is planned to load lower-resolution images before high-resolution ones. And then, yeah, it depends on the kind of data you want. You can prepare your data according to the bandwidth you have, or you can have a server which handles levels of detail, but it's very specific to the kind of data you have. Another question? Okay. Let's see. I'm from a city which has a lot of Collada files with 3D and LiDAR data as well. Can you describe briefly what the workflow or the pipeline is to get this data into a working web portal, for example? Well, as for now: complicated. I don't know about the visualization part. For the data set we have, for example, it's a flat organization; I mean, you can just generate the data in a specific format.
There are some tools for that, but there is still some manual manipulation you have to do, and it probably won't scale that well with that format. So at some point you will need an improved format for large-scale data, be it 3D meshes or point clouds. You can use the Entwine format for point clouds, or the Potree one. For 3D meshes we don't yet have any specification; maybe 3D Tiles will be the solution, but the tools are not ready yet for a good workflow there. Otherwise, you can use a server for your data, which will be able to handle a large amount of data, but you have to integrate your data into the server. For now, what we do is mainly take not Collada but CityGML data as an input. So you can take your Collada files, turn them into CityGML files, and then we have import tools for the CityGML that you put into the database, and then you have a server serving the 3D data from the database to iTowns. That's the current workflow for 3D data, 3D meshes. For images, you can just store them on a file server; that's all right. Other questions? You had a question? Okay. Yeah, workflows are something we still have to work on. Currently we are focused on the visualization part, but being able to bring the data to the client is still an open aspect, and an important one. One last question, maybe? What I was going to ask is a similar question; it's almost the same question that I asked Daniel. I have checked our data; it's basically a text file with a bunch of 3D coordinates recorded from LiDAR sensors which were mounted on trains. I'm pretty interested in turning it into some kind of visualization so that we can pitch the 3D to the management. I'm very interested in the workflow, how to do that. Do I really need a PhD in the field to get started? Since we integrate Potree, the answer will be exactly the same as Daniel's: the preparation is exactly the same. You just use the PotreeConverter for now. It will take your LAS file, generate a bunch of files in a specific structure on the file system, and then you can use this as a source for Potree. Since Potree is integrated into iTowns, you just do that; it's kind of easy. To use the PotreeConverter, you just run the command line and say: this is my LAS file, generate me everything. Thank you very much, Vincent. Thank you.
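For reference, the LAS-to-Potree preparation step mentioned in that last answer can be driven from a small script. This is a hedged sketch: the paths are hypothetical and the exact flags depend on your PotreeConverter version, so check its help output before relying on it.

```python
# A minimal sketch of the LAS -> Potree preparation step, driven from Python.
# Paths are hypothetical and the exact flags depend on the PotreeConverter
# version installed -- check `PotreeConverter --help` on your system.
import subprocess

subprocess.run(
    [
        "PotreeConverter",                  # assumes the converter is on your PATH
        "/data/survey/train_scan.las",      # the raw LiDAR points from the trains
        "-o", "/data/potree/train_scan",    # output directory with the tiled octree
    ],
    check=True,
)
# The generated directory can then be served by any static web server and
# loaded by Potree (or, as described in the talk, by the Potree layer in iTowns).
```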
We present iTowns, a web framework developed in JavaScript / WebGL for 3D geospatial data visualization, with capabilities for precise measurement directly in the browser. The first use case of iTowns is street-view data visualization: immersive images, but also terrestrial LiDAR point cloud data. But iTowns now supports many more data types: oriented images, panoramic images, point clouds, 3D textured models, and WFS vector data. iTowns OpenSource is the descendant of the initial iTowns software developed at the MATIS research laboratory of the French National Mapping Agency (IGN). iTowns OpenSource version 1.0 was released in February 2016. The framework allows you to: visualize projected images on a mesh (cube, 3D model); visualize panoramic images; display depth panoramic images; display extruded buildings (from WFS or other sources); navigate in 3D (click & go); display point clouds; visualize textured 3D models (B3D, 3DS); and use a simple API. We detail iTowns features with videos. The data showcased was acquired by IGN's Stereopolis car. Aside from presenting the software, its present state and the future 2.0 version, we also explain the project history, which is an interesting case of technology transfer from research to industry.
10.5446/20340 (DOI)
You can use that microphone. Hi. Hello, I'm Vincent Picavet. I'll be talking generally about how to use open source software to manage water; water in a general sense, so I'll speak about a lot of different solutions, for different kinds of water management applications as well. When we talk about water management, it can mean a lot of different things: surface water hydrology, groundwater hydrology, and urban hydrology as well. So I'll show a few examples of these different kinds of applications. I'll talk about a lot of different software: Processing, PostGIS, QGEP, QWAT, Roam, QGIS versioning, QGIS Epanet, QGIS SWMM, and FREEWAT. I'll go very fast, but the point is to give you a panorama of what exists and what you can do with it. Most of these tools are based on two products, QGIS and PostGIS, and this is an illustration of how you can use QGIS for many different things. Actually, QGIS is more than a GIS; it's a platform, a development platform, so you can adapt it to whatever needs you have. You have the QGIS core, which is the core of the application, and then you can have plugins on top of QGIS, which allow you to use different modules. You have the Processing toolbox, which is the toolbox for QGIS with geo algorithms, where you can find a lot of different processing features coming from different open source modules as well. And then you also have the connection to spatial databases, and in the spatial databases you can do processing, analysis and water management too. So I'll speak about these different approaches. First of all, in QGIS you have a lot of different modules, and concerning water we can cite a few of them: GDAL can already do some DEM analysis, mainly raster analysis, and sometimes you want that for water management. SAGA, GRASS and TauDEM are also modules which give you geoprocessing algorithms dedicated to water, and all of that is available through the QGIS Processing toolbox. So that's the toolbox you see when you open Processing, and then you can choose whichever algorithm you want. For example, in GRASS you can find a lot of things related to water: flow calculation, groundwater flow, hydrological models, sediment, stream modules, watersheds, flooded areas. There are a lot of different modules, because the GRASS people are very prolific and develop quite a lot of things very fast. So if you want to do GRASS processing, there are wiki pages which detail the algorithms related to hydrological sciences; that's a very good starting point, and it's all integrated into Processing. SAGA is another module used for water management, with algorithms like upslope area and a few other things. TauDEM is yet another module; it's for terrain analysis using digital elevation models. The principle is using DEMs to do hydrology, and it's also integrated in Processing, with a lot of different algorithms available. What's cool with QGIS Processing is that you can take all of these algorithms and combine them to create your own workflow, given some inputs, for example a DEM here.
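Before moving on to chaining algorithms together, here is what calling one of these hydrology algorithms looks like from the QGIS Python console. This is a sketch using current QGIS 3.x syntax (the talk demoed the 2.x toolbox); the algorithm id and parameter keys follow the GRASS provider but may differ between QGIS and GRASS versions, and the paths are hypothetical, so check the toolbox help for the exact keys on your install.

```python
# Sketch: run a GRASS hydrology algorithm through QGIS Processing.
# Intended for the QGIS 3.x Python console; algorithm id and parameter
# names are assumptions that may vary with the provider version.
import processing

result = processing.run(
    "grass7:r.watershed",
    {
        "elevation": "/data/dem.tif",           # hypothetical input DEM
        "threshold": 10000,                      # minimum basin size, in cells
        "drainage": "/data/drainage_dir.tif",    # hypothetical output rasters
        "basin": "/data/basins.tif",
    },
)
print(result)  # dict of output paths/layers produced by the algorithm
```

The graphical modeler described next simply chains several of these calls together, wiring the output of one algorithm into the input of another.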
You can go from a DEM to a watershed basin analysis and then do vectorizing to get a watershed layer, combining it with other algorithms along the way. So you have a designer GUI to build your own algorithm using all the available modules; this is the graphical modeler, and it's still evolving. You get automatically generated forms where you can choose the sources, the outputs and the parameters of the algorithm. So this is a watershed computation example. That was mostly for hydrology; you can do network analysis as well with open source tools. One of the tools you can use is PostGIS. PostGIS is a spatial database, and inside PostGIS you mainly deal with vector data, but you can deal with graph data too. If you have a topology, which means you can connect nodes and arcs, you will be able to use it for network analysis. You can either use PostGIS topology, which is a specific extension of PostGIS, or you can use your own custom topology, and then use features of PostgreSQL which allow recursive querying of the data to do vector processing. This can be used for data quality management, for example validity or topology checking, topology construction, verifying constraints on your data, and various topological and semantic checks. A lot of different things can be done in the database. You can also implement specific network analysis algorithms with recursive queries: pump activation, zone isolation, multiple networks, history of the data; a lot of things can be done. This is an example of working on a hydrological network. We have the rivers of a full basin, and we have nodes and arcs connected together in the database, with a table giving the source and the target describing each arc, plus a specific cost. Within the database you can use, for example, pgRouting, which gives you, directly from PostGIS, results as a set of arcs between a source point and a destination point; so it gives you the whole path down the river. You can also do things like finding what is upstream: I have a specific point in my network, and I want to find all the elements which are upstream from this point. You can do that with fairly advanced PostGIS queries. This is mostly plain PostgreSQL, what we call a recursive CTE (sorry for my French), in which you select the first element, and then you recursively search your network following the connections you have in your topology. Then you get the results, and from the results you have a geometry, and you can show this upstream representation in QGIS. The query looks complicated, but actually when you do network analysis in PostGIS it's pretty much always the same kind of query, so it's quite convenient to do, and very fast. Another module is QGEP, for wastewater, when we talk about networks. QGEP is an extension to QGIS, a set of extensions for wastewater data management. It's mainly developed by OPENGIS.ch, but also by the QGIS project in general and some other people. So you have a QGIS plugin, and it's based on PostGIS as well. A data model is provided, which corresponds to the Swiss standard from the VSA. It allows digitizing, profile creation, quality control; the symbology is included; you can do exports; and it's pretty full-featured.
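For reference, the upstream search described a moment ago boils down to a recursive CTE of roughly this shape, shown here driven from Python with psycopg2. The table and column names (network, source, target, geom) and the connection string are assumptions; adapt them to your own arc/node model.

```python
# Sketch of the "find everything upstream" recursive CTE, run with psycopg2.
# Table/column names and the connection string are assumptions.
import psycopg2

UPSTREAM_SQL = """
WITH RECURSIVE upstream AS (
    -- start from the arc we are interested in
    SELECT id, source, target, geom
    FROM network
    WHERE id = %(start_arc)s
  UNION ALL
    -- then repeatedly add every arc that flows into what we already have
    SELECT n.id, n.source, n.target, n.geom
    FROM network n
    JOIN upstream u ON n.target = u.source
)
SELECT id, ST_AsText(geom) FROM upstream;
"""

conn = psycopg2.connect("dbname=hydro")          # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute(UPSTREAM_SQL, {"start_arc": 42})
    for arc_id, wkt in cur.fetchall():
        print(arc_id, wkt[:40], "...")
```

The recursion follows the source/target topology against the flow direction, which is exactly the "always the same query" pattern mentioned above.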
It's still under development, and it's worth noting that QGEP has contributed a lot to QGIS core, because they wanted to develop this specific application for wastewater, but some needs were generic enough to be implemented as QGIS features and reused in other applications. So we try to separate what's specific to an application from what's generic, and improve the core of QGIS; that's pretty good. Here is an overview of the software. It's still QGIS: you have the project open with different layers, and you can see a profile of the wastewater network, plus some screenshots with different layers. So QGEP is for wastewater, and we have more or less the same kind of software for water distribution, which is QWAT. It's the same principle: a PostGIS data model, a QGIS extension, and a QGIS project. It was originally an internal project from SIGE, an intermunicipal water management body in the Lausanne area in Switzerland, and it's now used in production by the city of Lausanne, at SIGE, and in Vâlcea county in Romania. The people who implemented it provide you the data model, the QGIS configuration and some additional tools. We have funding for the project for 2016, other production deployments are planned for this year, and we are aiming for more genericity and industrialization. It's an open source project, of course, and you can contribute to it and use it. So it looks like this, for part of it. You can do visualization: you have the whole network with the different layers available, and you have specific forms according to the kind of object you are dealing with, be it pumps or valves or whatever. And you can still use every single QGIS feature. There are specific composer templates available, you can do querying, you can do editing, and it's still evolving. For example, following the same principle, we have contributed a new node tool for editing to the next version of QGIS, which will be much easier and much faster to use for data editing. We plan to have a more generic product in order to cover more use cases than the ones we are currently handling. We plan on improving the performance, which is already pretty good, but we want to benchmark and also stress-test the program so that it's more robust. We still have some specific developments to do: packaging, communication, and a bit of project organization; the project is pretty new, open sourced only last year, and it's also a project which is open to new partners and new opportunities. Next: what do I do if I'm out in the field? I want to do water management: where are my assets, where are my pipes, where are my pumps? Roam is a solution; we implemented that in Vâlcea. It runs on tablet PCs, so that's PC only, no Android. It's a very useful application which allows you to create a custom application based on QGIS. So it's still QGIS, but the interface is totally adapted to tablets. You have Roam on your clients with this specific interface, and then you have your data, which can be directly on the tablet PC you have in the field. Because you can be disconnected from the network at times but still want to edit, we have this QGIS versioning synchronization tool, which allows you to synchronize your local data collected in the field with a reference database in PostgreSQL/PostGIS, which is then accessed by QGIS. So this is the kind of architecture you can set up to do field work.
So this is the Roam application. You can see that it's very simple, and the nice thing with Roam is that you generate the application with menus: you define which forms you want and which layers you want from your specific QGIS project. We use QGIS versioning in that case to do the synchronization between the field and the reference database. That's a QGIS plugin which allows for history, offline work, and different scenarios; it basically does data versioning. Maybe you have seen some other presentations about it; it's used in various other projects as well. It's based on PostGIS and SpatiaLite: PostGIS is the reference data repository and SpatiaLite is the embedded database in the field. It works roughly like Subversion. I won't go into much detail, but it allows for conflict management when multiple users have edited the same feature at the same time: when they come back and try to upload the data, they get a conflict management system where they can choose which feature is the right one. Here you see a feature which has been modified twice, so I can choose which one I want, or modify the attributes, and then resolve the conflict and update the database. Okay, so now that we have a good network with all of our pumps and valves for distribution, we have this QGIS Epanet extension, which we developed at Oslandia. EPANET is a simulation tool made by the EPA, the environmental agency in the US; it's a public domain simulation tool for water distribution networks. We integrated EPANET into QGIS Processing so that you can run simulations directly from within the GIS, from within QGIS. You set up all of your layers in QGIS, you parameterize the simulation with all the specific values you want for your valves, your pumps and so on, and then you can run the simulation directly inside QGIS. This comes from Processing: you say, I want this layer for junctions, this layer for pipes, you run the simulation, and it gives you the results of the simulation directly inside QGIS. You can see the low-pressure sections, you can see the reservoirs and the tanks, and you can overlay that with your data and see the results, for example the levels, whether the tanks are full or not, the temporal evolution of the water level or of the pressure in your pipes, and the way your pumps behave over a certain amount of time. So it's a direct integration of simulation and GIS, and this is something we are working on a lot currently: coupling the GIS part and the simulation part. We did that for wastewater as well: QGIS SWMM is exactly the same as QGIS Epanet, but for wastewater. It's a Processing extension as well, but it's still in beta, so you can try it; no guarantee it will work, but it should. If you want to help develop and contribute, it's open. And we also have the FREEWAT project, which is something much larger than just QGIS SWMM or QGIS Epanet. FREEWAT is an H2020 project, and it aims at creating an integrated modelling environment. It's related to the EU Water Framework Directive and builds on previous EU projects. The objective is to have surface water, groundwater, transport, hydrogeochemistry, pollutants, GIS and 3D inside the same software, inside the same platform, so that you can do all the simulation and computation from within the same interface. It's open source, of course, and QGIS-based.
There are some QGIS GUI extensions which allow you to run the simulations, prepare your data and do analysis, and there are some Processing extensions as well. We are still developing that; the first pre-release version has been distributed to the partners, and it should be released later this year or next year. The project runs until next year at least. So we have a lot of new modules: for management and planning, for terrain data analysis, for calibration and sensitivity, unsaturated zone transport, lake interaction, crop water needs; a lot of different simulation modules that you can use together to manage your data for hydrology. These are some results, typically a MODFLOW computation. FREEWAT interacts with MODFLOW, which is a groundwater flow simulation model, and you can run the simulation directly from within QGIS, same principle: you have all of your layers, you enter all of your parameters, then you run the simulation and you can see the results directly on a map, or the temporal data, which is shown here. So what's still to do in the water area? Well, a lot, because water management is a very large topic. We have a few issues we want to tackle at some point. Data models are still an open question, there is no standard for everything, so we need to work on data models, and on data streaming and web services as well; maybe with QGIS Server, that could be an idea. Better simulation integration, so that the GUI for launching simulations gets better. We want better sensor integration, as well as SCADA connections to directly drive the equipment in the field. Better visualization, packaging, and of course it's an open source project, so building the community is always important, as well as development, funding and industrialization. That's a list of open items, and that brings me just in time to the end of my talk. It's a lot of things, but it's a very dynamic area, and it gives you a panorama of what's already doable with open source software for water management; so depending on your specific needs, you may go deeper into one or another of these tools. If you have any questions, now is the time. (Applause) Questions, please. I'm just looking at integration with existing commercial tools, like Kisters or Aquatic Informatics, which have quite extensive commercial use: people using those tools with large hydrometric water quality databases, which have sensor enablement and gauging/rating-type capabilities. You haven't actually mentioned how this can integrate with other systems that are already in place, rather than being a standalone tool that tries to do everything itself. Okay, so yes, it's doable. We didn't do it because our priority is developing open source software, so as much as we can, we use open source tools. But you can interact with proprietary software if there is a means of interacting with it. If it's not a total black box, if you can export the data in a more or less standard way or a standard format, you should be able to interact with it. One of the usual problems is that such software does not decouple the graphical interface from the computation engine, so sometimes you cannot run a simulation without opening the software, going into the GUI and doing a lot of mouse clicks. But if you have a separate process or a separate program for running just the simulation, then you are able to integrate it into Processing.
And usually you have to be careful, because the QGIS license (if something is integrated into QGIS) is GPL version 2, which makes it mandatory to publish your work under GPL version 2 as well if you modify the code or link your code with it. But in the case of Processing, what we do is simply run another piece of software, so there is no direct software link between the two, and the GPL doesn't apply in that case. So yes, that's doable; it's mainly a matter of funding and work. Another question? How do you deal with topology when you're doing the simulations? Do you have a topological model of your features, or do you just rely on points being exactly the same when connecting? So, which part of the simulation do you mean? The simulation of your pipe network, so for EPANET, for example? So we have a topology, which is a very simple topology: you have pipes which go from one node to another, you know the identifiers, and we rely on that. Actually, EPANET has its own data format and its own topology format, and in QGIS we have tables which reflect this EPANET format. But you could have a database with a different topology format, and then have a view which presents an EPANET-style view of your original format and does the translation. That's actually what we do with QWAT: QWAT is a water management system with its own data model, which is different from what EPANET requires, so we have database views on top of the QWAT model which correspond to the EPANET model, and then we can run the simulation directly on the QWAT model through these views. One more quick question, if there is one. I could ask about the WaterML standard; do you support that? No, we don't support that for the time being, and actually I don't know that much about that area; maybe my colleague, one of the QWAT or QGIS Epanet developers, does. As far as I know, there is no implementation on top of QGIS for WaterML management at the moment. Okay. Thank you once again. You're welcome.
This presentation details some open source tools dedicated to water network management, be it for water distribution or wastewater networks. The qWAT project is a specific tool based on QGIS and PostGIS. It aims at managing water distribution networks. The data model is part of the project and covers most use cases for this kind of asset. The qWAT project is strongly linked to QGIS and tries to contribute to the core of QGIS, so as to mutualize developments and features among other QGIS-based applications. Similarly, the QGEP project is dedicated to wastewater networks. We also present a use case for an implementation of a wastewater information system in France, based on QGIS and PostGIS. Furthermore, we show how PostGIS-based projects allow network and graph analysis, so as to extract meaningful information for decision-making and planning. QGIS-Epanet and QGIS-SWMM are two QGIS Processing extensions integrating simulation features for water distribution and wastewater networks. They let the user run simulations to analyze the network, check dimensioning, and identify specific issues. This set of tools shows that open source GIS now tends to fulfill use cases for specific fields of application, and water management is among them.
10.5446/20338 (DOI)
Okay, so I think we'll start with Calvin Metcalf, who is going to talk about nobody caring about your datum, or the Kleinstaaterei of spatial references. I'm very eager to hear what you have to say. Hi, so, disclaimer: the original version of my talk was going to be all these analogies between spatial reference systems and the state of the Holy Roman Empire back in the day. It turns out you cannot get a shapefile of the Holy Roman Empire for less than about 800 euros. I spent far too much time trying to find historical medieval German shapefiles; for a lot of other countries you can get this data, but apparently when there isn't an actual country somewhere, they tend not to have that kind of data. But yeah, a bit about me. I work for a company called AppGeo. They're not really going to figure into this talk, but they paid for me to come out here, so I'm going to give them a shout-out. We have a later talk tomorrow; you can join us if you want to learn about writing WMS servers in the cloud. Yeah, today we're going to talk about Proj4js. This is a library that I maintain. I'm going to leave you hanging here while people find their seats. While we wait for people to sit down, I'll just plug my company: if you are a small local government and have spatial data that you would like to display on a map, we can help you out. Go talk to Michael Turner over there; he will be happy to sell your community some fantastic software called MapGeo, which is also based on open source tools. But all right, Proj4js. This is a library that I maintain, and there hadn't been a huge amount of work on it until I wrote this talk, and then instead of writing this talk I did a bunch of pull requests; call it productive procrastination. But this is a big library. In the background of some of the slides, as I keep talking, we'll show the different projections that are supported in Proj4js, just so you can get a feel for the scope of the library. All the projections have a short name that's used in the proj strings that PROJ.4 uses, and then we have the aliases that well-known text uses. So you can have, for instance, Albers Equal Area, also called Albers_Conic_Equal_Area. Note the creative use of underscores; there will also be some that use other ways to separate words. And then there's just calling it Albers, because I don't want to type as much, I guess. So my involvement in Proj4js started when I complained about it on Twitter; that's really how a lot of my open source stuff starts. I'd been writing a shapefile parser, because I'm a masochist, and I wanted to be able to just drag and drop a zipped shapefile onto a map and have it just work. But people that use shapefiles tend to be people that have their own projection, so you can't just drag and drop something in Massachusetts State Plane meters and have it just work, because you'd have to reproject it into the Google Maps projection. And Proj4js wasn't really working. It turned out it was not actually being maintained by anybody these days, so I got volunteered. When I started, there was no readme, so there was no way to figure out what the API was. It had an ad hoc Python build system. It was all synchronous, but every time you used a new EPSG code, it would actually go over and query spatialreference.org and try to download the well-known text.
And if anybody has used spatialreference.org, you'll know that it is also an unmaintained thing that I'm frankly shocked still works, because I don't believe anybody actively maintains it these days. Yeah. So Proj4js had been separated out of a product called MapBuilder, which some of you may remember from about a decade ago; it was an early web mapping library built for WMS-style stuff, and Proj4js was pulled out of it into its own thing. It was very much Java or C++ that was not so much translated into JavaScript as transliterated, because it used all of those Java and C++ object-oriented idioms, and it just did not look very much like JavaScript. A lot of this open source stuff ends up having more in common with evolution than intelligent design: nobody intends it, you just get this sedimentation of layers on top of layers doing different things. Every time I did some code hacking on it, I wanted to go do some Haskell afterwards, because it was everything that was wrong about object-oriented programming: everything globally mutates state and nothing returns anything. You can see that there is probably a series of rational choices that would lead up to that, but then you're there and you just don't know what's happening anywhere. A lot of the work has been separating things out and trying to figure out what happens where, and whether a given branch is ever needed. For instance, I was looking through and found that we have all these references to grid shifts, but no way to actually give it a grid shift file, because there's no grid shift format that works in the browser; so we can get rid of about half the datum transform code, because there's no way to reach those code paths. Much to the sadness of Peter, one of whose goals is to get grid shifts working in Proj4js. But yeah, it's very hard to follow the control flow of anything inside there. When I started, and actually still, there are a couple of bugs we've had recently where, if you do a projection several times in a row, you get different results, because stuff ends up getting mutated on the actual projection object. That was fun to find out: doing it again doesn't give the same thing. There are also a lot of projections that have flags and terms. This one has a term called czech that, if it's set, causes things to get flipped, but you can't define it anywhere; at least I don't think so, but basically everything from the definition just gets copied onto an object, so it might be possible to define it somewhere. So I'm going to have to check it out, I don't know. (You can. In Proj4js? Set the czech flag.) Yeah, well, that's the thing: I don't know if it's just a legacy from the older version or if it's still something you can do in this version. And of course, like all good algorithmic stuff, all of the actual projection code is full of single-letter variables that are usually similar, but sometimes have slightly different idioms.
Some of them will have x and y, some of them will have w and x, some will be lat and long, and some will only have phi and lambda. At least it's better than some of the D3 projection code, which actually uses Unicode lambda characters; that is a fun time. And since everything is on that one object, it's very hard to tell what's actually important and what was just copied over. So any time something breaks, you're just like, maybe let's put some other stuff on here and see if that works; maybe I actually need to call this parameter something else and then it will work. Is it latitude_of_origin? Is it lat_0? It's a good guess. There are also datum shifts, which are very intimately intertwined with the projection stuff, hence why there is a longlat "projection" which simply returns what you gave it, because there's no way to actually do a datum shift without doing a projection. So, that's some fun code paths there. So, some of the issues I alluded to earlier: the way projections are communicated around is quite a bit of the problem. Yeah, this is Google Maps. It is, in my opinion, empirically and objectively true that the well-known text representation of coordinate reference systems is the worst format ever. I just loathe it. It has its own bespoke serialization format. It's not S-expressions, it's not XML, it's not anything that anybody else uses. It is its own quasi-format; it's not even like the rest of well-known text. It's that you have a property, square brackets, and one or more things which can include other properties with square brackets. And it's sort of like, why? The parser that I ended up writing simply flips the keyword and the first term in the brackets around, so it just becomes an S-expression, like Lisp, and that's super easy to parse, because that's just an array in JavaScript. With a lot of specs, people see how the spec is actually used and update it to take that into account. This is not the case with this one. Fun fact: every single definition I've ever seen uses the keyword PROJECTION to define what the projection is. As I read the spec, it turns out that is an alias for the method name. So technically you're supposed to be using METHOD, PROJECTION is an alias, and I'm pretty sure using the method name would fail in just about every single library, because nobody does that. So there's no actual feedback loop of "we're going to see how people use it and adjust the spec". No, no, they're going to have a perfect, pure spec that really captures the epistemologically perfect projection information; we don't need to dirty it up with reality. It also spends a lot of time on axis order, and apparently claiming axis order is not important is fighting words, because I mentioned that earlier and somebody from GDAL was like, no, we need that. From my end it doesn't matter, because this is x and y, so they don't have an order to me. But the spec spends more time talking about axis order, the ability to inherit from a different projection, and axis direction than it does talking about what the actual projections are.
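As a toy illustration of the "flip it into an S-expression" trick described above, here is a tiny Python parser (not the actual Proj4js code) that turns a WKT fragment into nested lists with the keyword first. Real WKT has more corner cases (scientific notation, escaped quotes) than this handles; it only shows why the approach makes the format easy to walk.

```python
# Toy WKT parser: keyword-first nested lists, i.e. an S-expression.
# NOT the Proj4js implementation; illustration only.
import re

TOKEN = re.compile(r'"[^"]*"|[A-Za-z_][A-Za-z0-9_]*|-?\d+(?:\.\d+)?|[\[\],]')

def parse_wkt(text):
    tokens = TOKEN.findall(text)
    pos = 0

    def value():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok.startswith('"'):
            return tok.strip('"')
        if re.fullmatch(r'-?\d+(?:\.\d+)?', tok):
            return float(tok)
        # keyword: may be followed by a bracketed list of children
        if pos < len(tokens) and tokens[pos] == '[':
            pos += 1                      # consume '['
            children = [value()]
            while tokens[pos] == ',':
                pos += 1
                children.append(value())
            pos += 1                      # consume ']'
            return [tok] + children       # keyword first -> S-expression style
        return tok

    return value()

print(parse_wkt('UNIT["Meter",1.0]'))
# -> ['UNIT', 'Meter', 1.0]
print(parse_wkt('PROJECTION["Albers_Conic_Equal_Area"]'))
# -> ['PROJECTION', 'Albers_Conic_Equal_Area']
```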
But somehow, in the small table that discusses that, it manages to define multiple projections with the same alias. There are several versions of Lambert Conformal Conic, and it suggests you refer to them all as LCC, because why would you want one-to-one mappings between names and ideas? So there are multiple conflicting actual implementations. Projections that you get from Esri software are just like the ones you get from open source software, except everything has a D underscore in front of it. Does anybody know why? I'm literally asking, in case somebody here knows; it's something that's been bugging me. Why does that have to be D underscore? I love GIS. So, this all got written when I was working for the state of Massachusetts, and your tax dollars were hard at work, because my boss used to go on vacation for months on end and just forget to give me something to do. I was in a union, so it wasn't a problem, but, you know, what are you going to do? So, with the help of Andreas Hocevar, we ended up getting a working version: you can give it a projection definition and it will likely project your stuff correctly. Most of them will. Probably. It's not necessarily tested on all projections, because we're not done yet; this is the beginning of the process. This projection, I think, is actually an alias for omerc: when you go to EPSG.io, it gives you well-known text with Oblique Mercator, or whatever that one's called, but it gives you somerc for the proj code. The documentation is all in French on that one; well, the documentation linked from the projection file is all in French, including the comments. So I'm not sure, but we have some Swiss people here, so I was very excited when Peter said, I have a demo with the Swiss one; I have some questions about that one later. The next step I would take to improve it would be to pull out that well-known text parser and have it generate something intermediate that you can test against, so you can check that this well-known text actually generates the same thing as this QGIS one or this proj code, and so you could actually test that stuff. In theory it would be great to have something we could standardize on, like a sane JSON representation of projection stuff, but then of course that would be the fifth competing standard, with its own issues. Open source is hard. But yeah, I'm going to get on that as soon as I have some free time in my life. Or you, the user of an obscure projection, could help us out, instead of being shocked, shocked, I say, that, you know, Krovak doesn't work for all projections. Krovak does work, by the way; we got an issue and we ended up fixing that one. A lot of the issues with Krovak end up being character encoding in shapefiles; that's a talk for a different time, possibly the not-safe-for-work geocoding one. But yeah, that's tricky. The other thing that's really lacking in projection information is test vectors. There's no way to check, am I projecting this correctly? Because currently there's sometimes a conflict between libraries.
And it's like when you're wearing two watches: we've had issues where, well, Proj4j gives this result; okay, well, cs2cs agrees with us, so I think you're outvoted. Probably. It's the old joke: if you have two watches, you never know what time it is, but if you have three, you can figure it out. So yeah, test vectors would be very welcome, because the EPSG file has thousands of entries, and it's pretty hard. Just having some data to be able to say, nope, you're doing your conversion for this Caribbean projection that uses Clarke's links instead of meters, and you're not quite doing it right. Because usually your guess is as good as mine. Any questions? Okay, thank you. We have a question there. If I had examples of some particular projections that you could use as test cases, where would I put them? If you have them in some sort of data format, just post them on GitHub as an issue; how big are we talking, I guess? Well, that's the question. I mean, do you have a preferred format? We probably do JSON. We have a bunch of test vectors that we basically made by hand ourselves: okay, if I go to PROJ.4 and use this projection, this turns into that, so we'll just put that in as a test; it's probably right, and we'll try to be conformant with it. So yeah, JSON would probably be best, but if we start getting a lot of them, we might try to streamline it to make it easier. Yeah, just open an issue and we can work through it on GitHub. Any more questions for Calvin? No. Okay. So thank you for your presentation.
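Since the talk is about the JavaScript library, here is an analogous sketch of the test-vector idea in Python using pyproj (the Python binding to PROJ): define the same CRS two ways, from the EPSG code and from the classic proj string, transform the same point with both, and check that they agree. The proj string is the standard spherical Web Mercator definition; the test point and the tolerance are arbitrary assumptions.

```python
# Cross-check two representations of the same CRS, the kind of consistency
# test the talk wishes existed as shared test vectors. Python/pyproj analogue,
# not Proj4js code; tolerance and test point are assumptions.
from pyproj import Transformer

point = (-71.06, 42.36)  # lon, lat somewhere in Massachusetts

by_code = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
by_string = Transformer.from_crs(
    "EPSG:4326",
    "+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 "
    "+x_0=0 +y_0=0 +k=1 +units=m +nadgrids=@null +no_defs",
    always_xy=True,
)

xy_code = by_code.transform(*point)
xy_string = by_string.transform(*point)

tolerance = 1e-3  # metres
agree = all(abs(a - b) < tolerance for a, b in zip(xy_code, xy_string))
print(xy_code, xy_string, "agree" if agree else "DISAGREE")
```

A shared JSON file of (definition, input, expected output) triples, checked like this in each library's test suite, would settle the "two watches" arguments described above.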
What is the difference between people that make maps and GIS people? GIS people waste much of their time dealing with spatial reference systems, while people making maps just avoid them like the plague and instead focus on the projections they need to represent their data. Most discussion of projections and spatial reference systems is mainly about the large number of small spatial reference systems, each used by a limited number of groups. Work for the state of Massachusetts? Use EPSG 2805. Work for the Boston police? Then you use EPSG 2249. This talk will focus on the gap between how projections work in theory and how people constantly waste their time dealing with them in practice. Most of the mental energy spent on projections and spatial reference systems goes into incompatible local systems used for storage of data, also known as internal details nobody should care about, with disproportionate time spent converting between datums whose differences are smaller than the precision of the data.
10.5446/20335 (DOI)
Welcome to the second session. The next three presentations will be about QGIS. Here in the Berlin room we will now present Matthias Kuhn from OPENGIS.ch, and he will tell you how to bring your QGIS projects out into the field. Welcome. Thank you, Lene. It's almost two o'clock, fifteen seconds left; I think we'll go for punctuality, so I'll start maybe three seconds early. My name is Matthias Kuhn. I am from Switzerland, as you may hear. I am mostly a QGIS core developer, actually. I've been in the project for a couple of years, about five years, and I've been working all through the stack, from the user interface down to data provider stuff, so I know the QGIS code base pretty well. I studied geography and have a master's in GIS from the University of Zurich, and before that I did a certificate of proficiency in software development. I've been in development for over 15 years, I think, doing full stack, everything from server applications to some embedded devices. Right now I work for a company called OPENGIS.ch. I do a lot of development, we do consultancy, we do training, and I would say I'm to some degree a geek. In my leisure time I just love to go out to the mountains, skiing and climbing. So, to get started: a couple of years ago my colleague, who unfortunately cannot be here today, created QGIS on Android. This was basically a port of the full-blown desktop GIS to Android, and it was running, and still is running, on some devices, unfortunately not on all of them. That's more or less what it looks like. Actually, the screen that you see here was taken on a desktop, but there's really not much difference compared to when you run it on a tablet. If you look at this interface, you can see that there are a lot of possibilities there, but you can also see that a lot of the things offered are not really something you would use out in the field when you do your work. And you can probably also tell that the buttons are just way too tiny to touch with your fingers, and if you make them bigger, they take more screen space, so you have even less room for your work. So all in all, what QGIS on Android offered was a very powerful rendering engine, because QGIS in general has a very powerful rendering engine; all the styles you can do with it are amazing. You have huge configuration possibilities with QGIS: you can fine-tune your project wherever you want, you have loads of tiny things which you can optimize for your use case, you can integrate tons of different data providers, all the different databases you can access, all kinds of files with geo information in them. This is another very good thing about QGIS. And, as I said, the complex user interface is a bit of a showstopper for QGIS on Android. So, I don't know how long ago it was, maybe one or two years, we started this project called QField. We decided to just keep everything the way it is: keep the powerful rendering engine (it's still a full-blown QGIS under the hood), keep all the configuration options, keep the data provider possibilities, but add a new user interface, because that's the main thing that's really annoying when you're working with your project out there. And we decided to go with a QGIS-first approach.
That means whenever we add new functionality, we first check if it makes sense to put it directly into QGIS and also offer it on the device, and not implement things in QField which are not available in QGIS, as long as it makes sense to keep it this way. So just as an example for the rendering engine, and this is the data providers to some degree as well: this is WMS, which is rendered behind a 2.5D renderer, a classified one. And you can just integrate all of this in the project and it will render the same in the field. This also shows that the 2.5D renderer is built around the QGIS expressions, this SQL-like syntax. And these are obviously working, because it wouldn't look this way if expressions weren't working there. One thing we did to optimize the user interface was to add a lot of context to the user interface. So often when you do something, you actually know what you're doing at the moment, so we don't have to explain everything and we just show what really makes sense in this context. So let's say, for example, when you do data entry, what you get is a list of layers. We have the test line layer activated right now to digitize something on. And we have just one single button here to start digitizing this new line. Now maybe before I come to the contextualized buttons, I'll also give a short introduction to this little crosshair here in the center, which is the main means to place points. Because with QGIS for Android, you could touch somewhere where you wanted a point. You would touch somewhere here and you would get a point somewhere around there. Originally, we bought extra tablets with pens because you could be a little bit more precise, but you're just never as precise on a tablet as you are with a mouse. So instead of going for this approach, we said we will just make a crosshair in the center and this will be used to locate points. So you navigate the map to where you actually want it, until the crosshair is there. And then you can click this button here. And as soon as you click it, there will be some more buttons, which only make sense now. And you can move the map away. The line will start to draw. You can remove the last digitized point, delete the feature in total, save the current thing or add yet another button, another vertex, sorry. This is how geometry digitizing works. It's pretty straightforward, I think. The next thing that QGIS offers, which is also very helpful if you want to get some data from outside, is attributes. So when you want to fill attributes, you don't want to just have free text form entries, but you should have some nice widgets which will help you to do this. QGIS allows for configuring a wide range of widgets. You can see some here. There's a simple text edit, a range box where you have numbers only, checkboxes, external resources, which are for pictures, and things like this. And I've seen a lot of other projects for which you have to configure these things separately. So you have your QGIS project set up, you have configured all this for your desktop work, but then you still have to use a plugin and configure everything again to get it to run on this, I don't know, web client or on this mobile device or whatever. With the QGIS-first approach, we just use whatever information we have in the project, reuse it and rebuild the user interface based on it. That's what it looks like. Does it work? Yeah, okay. Maybe I have to... I don't know, how can I stop it?
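To make the QGIS-first idea a bit more concrete: the widget configuration lives with the layer in the project, so a field set up once on the desktop is reused by QField to build the mobile form. A minimal PyQGIS sketch, assuming the QGIS 3 API; the layer and field names are made up for the example, and the exact config dictionaries can differ between QGIS versions:

```python
from qgis.core import QgsProject, QgsEditorWidgetSetup

# Hypothetical layer name; replace with a layer from your own project.
layer = QgsProject.instance().mapLayersByName('trees')[0]

# Drop-down with fixed choices for the 'species' field (ValueMap widget).
species_idx = layer.fields().indexOf('species')
layer.setEditorWidgetSetup(
    species_idx,
    QgsEditorWidgetSetup('ValueMap', {'map': {'Oak': 'oak', 'Pine': 'pine'}})
)

# Numeric spin box limited to a sensible range for 'height_m' (Range widget).
height_idx = layer.fields().indexOf('height_m')
layer.setEditorWidgetSetup(
    height_idx,
    QgsEditorWidgetSetup('Range', {'Min': 0, 'Max': 60, 'Step': 1})
)
```

The same configuration is normally done through the layer properties dialog; the point is simply that it is stored with the project, so the mobile client can rebuild the form from it.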
Can I pause the video? Not really? Not really. Okay. I've got to talk fast. So this is, when you start here, you have a GPS location marker where we are. We have here the point where we want to digitize the point. We centered on the GPS point and added a new feature. Here is a simple text entry with the keyboard you know. There is a checkbox which is configured for one of these widgets. There is camera support. If you have an external resource widget configured, you will get this camera support and the picture just shows up. And you have value map support for picking values and things like this. And once we have entered all this data, we can just click on save, and that's a new point we have just digitized with all this data. That's the link in case the video doesn't work out of the presentation, or in case you want to watch it again when you are at home. This basically integrates all these widgets and makes your field work hopefully very comfortable. The workflow in general now, a bit broader than just out in the field, what we have been focusing on is: you first prepare your project on QGIS desktop. Then you move it onto a mobile device. Then you do the field work, the collection, etc. And then you synchronize it back to QGIS. So that's what normally happens. Of course, you can also work from the mobile device directly on a database if you are always connected. But I know that a lot of people are not always connected when doing field work. So that's where we have offline editing support, which I think I'll come back to later. But the important thing is this: we told ourselves we don't offer on the tablet all these configuration options which QGIS desktop has. Before you start a campaign, when you plan the campaign, some GIS specialist can prepare it somewhere in the office. He can sit down with a huge screen, with all the configuration options, with all the bells and whistles that QGIS offers. He can prepare it. And then when he ships it to the device, and when you are out there, you just have a few buttons to control what you actually want out there. Now this has turned out to be a quite complex thing, because one thing is, for example, that your project needs to be portable. That means that on the device, the files cannot just be on your C: drive or wherever. They have to be relative paths that point to the files, et cetera. And for this, we have now developed a new plugin, which is called QField Sync. It has just been released this week, and it helps you to prepare your project. It is available from the plugin store in QGIS. You can just install it, and it will make your project portable. That means it takes care of making paths relative. It takes care of making an offline copy of data from your databases. And later on (we are not yet at that point, but we are working on it) it will synchronize the data back into your database when you are back in the office in the evening. You can choose if you want a certain layer to be used online directly or if you want it to be used as an offline copy. I'm running fast today. So the current state is that QField opens QGIS project files. It uses QGIS data providers, except for some formats like ECW, where it's quite complex with licensing terms to get it to run. But in general, we support most of the data providers that are available. And the rendering engine is fully there; whatever QGIS 2.16 supports should also be supported by QField.
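The two project-preparation steps the plugin automates, relative paths and offline copies, can also be sketched with core PyQGIS calls. This is only an illustration of the idea, not the plugin's actual code; the paths and layer choice are hypothetical, and QgsOfflineEditing writes the copy into a local SpatiaLite database by default:

```python
from qgis.core import QgsProject, QgsOfflineEditing

project = QgsProject.instance()

# Store layer paths relative to the project file so the whole folder
# can be copied to the device.
project.writeEntry('Paths', '/Absolute', False)

# Convert the editable layers to a local offline copy; edits made in the
# field can later be pushed back with QgsOfflineEditing().synchronize().
offline = QgsOfflineEditing()
layer_ids = [lyr.id() for lyr in project.mapLayers().values()
             if lyr.name() == 'trees']
offline.convertToOfflineProject('/data/fieldwork', 'offline_data.sqlite',
                                layer_ids)

project.write('/data/fieldwork/campaign.qgs')
```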
The forms are almost there. We are working on improvements there to get more widgets to make it even easier to enter data. You can digitize points and lines. And you have GPS support, and I think it's simple to use, at least. You can form your own picture of this yourself. What is soon to come is polygon digitizing. We are working on that. This is the last bit of geometries that really needs to be done to be useful. We are working on snapping, so you can digitize new features on your layer snapped to other features. Other form improvements, for example, those of you who know the drag and drop designer for designing forms in QGIS: when you have like 50 different attributes, you don't want to offer them only in a big list, but you probably want to have them on different tabs and grouped and nicely designed. So we are working on integrating this one. I thought that I'd have it ready for today, but unfortunately in the last days I had to talk too much to other people here instead of working and hacking. Form attribute validation. This is something that is also available in QGIS desktop, where you can require a certain attribute to be, for example, in a given range: if you know that all elevations within your research area are between 400 meters and 700 meters above sea level, and suddenly somebody writes 5,000, it does not allow you to confirm it and it gives you a hint, like a red star: hey, sorry, check that again, there is something wrong here. That's available in QGIS desktop since 2.16. Those of you who didn't try it, that's really something you should check. This will also be available. Other things which are not yet covered, but are high on our priority list, are a legend, so you have some context on what the features look like and how they should be accessible. Hybrid editing. This is something that we are thinking about. It's probably going to be a bit complex. That is, instead of synchronizing in the evening when you come back to the office, you would constantly, whenever the network is available, push up your changes, get changes back and then continue working. So that could be a quite nice thing to have. More GPS status information, and that directly into attributes. We are doing the groundwork for this right now. There is, for those of you who are very familiar with GPS, RTK status, for example, which means like sub-centimeter precision. So if you can save with a new feature whether it was taken with this precision, it will be useful information later on. Integrating external sensors is a similar topic. Like, I know you go out there and measure the carbon monoxide or whatever, and you could get that directly into attributes. This is also a very useful thing for some of our customers, which we are hoping to ship soon, but it's not yet fixed. Any of you, if you have great ideas, please approach us. I'm here until Friday, so you can just come up to me and talk to me and tell me about your ideas and wishes. And so now I am already at the last slide. I don't know if some of you have been in the main room before and have seen Steven Feldman's talk. How to help, you've heard the name. I mean how to get involved, how to help us get things done. The first thing of all is: get it on Google Play, install it. It's free. It's open source. Use it, and when you've used it, tell the world about it. Tell the others that it's great. Please rate it five stars.
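The range check described here is stored as a constraint on the field, so it travels with the project to the device. A small hedged PyQGIS sketch of the idea (QGIS 3 API; the field name and bounds are just the example from the talk):

```python
from qgis.core import QgsProject

layer = QgsProject.instance().mapLayersByName('trees')[0]
idx = layer.fields().indexOf('elevation')

# Expression constraint: the form flags the field and refuses to confirm
# values outside the plausible range.
layer.setConstraintExpression(
    idx,
    '"elevation" >= 400 AND "elevation" <= 700',
    'Elevation must be between 400 and 700 m above sea level'
)
```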
Some people had trouble setting it up and rated it one star, and that looks a bit crappy. So if every one of you gives us five stars, we will be happy. Contribute to the documentation. There is documentation online; the link is qfield.org. I always write documentation whenever I develop a new feature, but I'm sure there is a lot of additional information which could be very useful. So if you solve a problem, it would be great if you could just edit the documentation and add some information to it. At the bottom there is always a link, "Please fix the documentation here", which you can click to improve it. It's quite easy. There is also a translation project where you can help to translate the app, which is also very much appreciated, for native speakers of whatever language. Something which would be great is if you could write a case study, because right now we are the technical guys. We try to make a good app, but a lot of people who could potentially use it would be very interested to see how forestry people, biologists or whoever use it, and to read it in their language. Don't talk in our technical language but in the language of people who really need to get field work done. And last but not least, we do work. We need to make a living. Sponsor a new feature, contact us and say what you need. If you have some money, we'll be glad to improve the app for you. And yeah, so that's it mainly. I think I'm almost at 20 minutes but I've got a bit left. Thank you for your attention. Thank you for the presentation. Is there anyone in the audience who wants to ask a question or two or five? We have five minutes. Thank you very much for the beautiful work you're doing. In my office in Nigeria, we are using OpenMapKit. I want to find out if you cover that with this app, look at it, see what you can bring on board. I like this because we have QGIS that we're using in our office. I want to find out about the accuracy, GPS accuracy, like the way it is in ODK. Can you set an accuracy limit, maybe a limit of plus or minus five meters, so that you can only pick points within that accuracy limit? Is it possible, or is this something that I can work on if it's not possible now? If I understood you correctly, you're asking for saving the accuracy of each point in an attribute? No, no, no, no. I want to be sure that somebody I send to the field is not bringing me locations with an accuracy of about plus or minus 50 meters. I want to make sure that it's restricted; ODK presently has that feature. If it's not possible now, probably not. It's not possible now, but if you allow me to go back two slides here, or three slides, I'm not sure, there is attribute validation, which means that based on whether an attribute is OK or not, you can enable or disable the form confirmation, plus you can also highlight the field which is wrong. In the future, it should be possible to put the GPS accuracy into a field. In the combination of these two options, I'm pretty sure that whenever we get there, this is one of the things that will be possible. Another question? Can you use WFS as a source, as input, and after that, editable? Yes. So, all right, it would be WFS-T then. To edit, yes, to edit, yes. I did not check WFS-T, but as far as I know, it is included. You would have to test it. If you do, please let us know. There is a nice table on the documentation page which says what's supported. If you fill it out, I'll be very happy.
What we do now is we're working on WFS integration for a client, where it is about getting a subset of all features, like in this research area, in the office beforehand, putting it offline, working with it, and then putting it back into a Postgres database. I think, yes, that's our experience. Is there any option to, or any thought about, maybe a scripting option? I know in QGIS, sometimes you want to automate a process. You write a quick script and then just put it in and just enter some data in your script. Is there any option? There is not right now. This whole system is built around QML and JavaScript, which in Qt are the scripting bindings normally offered for these kinds of tasks. We think that we should, in the future, integrate some kind of possibility to script there. Python is unfortunately not available right now. So I'm sure one day it will be possible, not at the moment. And on the other hand, I think a lot of the scripts which had to be written so far were for the user interface of forms, where a script disables certain fields based on certain conditions and so on. And I think with the expression engine and all the new stuff that comes in there, we can also reduce the amount of required scripts quite a bit. Anyone else, any questions? Then I think we... Is it possible to use QField on other platforms than Android? We only do it on Android right now. It does run on Linux, so it works on the desktop. On Windows, I'm sure it should not be very hard to do. We didn't do it, but it should be quite straightforward to just compile it and ship it there. On iOS, I think there are some licensing problems, some open source libraries which are not compatible with the way they are shipped through the App Store or so. Maybe there would be a solution to that, so if somebody is interested, but it will not be very easy, it will probably involve a lawyer. What is your native editing format on the tablet? You can use whatever you want. We recommend using SpatiaLite or GeoPackage, but basically it understands shapefiles just as well. Okay, we know we have to stop here. I know you will be available here. I am. Remember, projects like this need your awesome ideas, as he said here. The last bullet was sponsorship of new features. Thank you for this presentation. Thank you, Lena. APPLAUSE
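If your source data is in shapefiles, one way to prepare a single editable file for the tablet is to convert it to GeoPackage first. A hedged sketch using the classic PyQGIS writer call (newer QGIS versions prefer writeAsVectorFormatV3, and the paths here are hypothetical):

```python
from qgis.core import QgsVectorLayer, QgsVectorFileWriter

# Load the shapefile via the OGR provider.
src = QgsVectorLayer('/data/trees.shp', 'trees', 'ogr')

# Write a GeoPackage copy that can be edited in place on the device.
QgsVectorFileWriter.writeAsVectorFormat(
    src, '/data/fieldwork/trees.gpkg', 'UTF-8', driverName='GPKG'
)
```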
Ubiquity: The ubiquity of mobile devices has seen a huge increase in the last years. With more than 2 billion mobile devices shipped in 2015 and a growing market, such devices also become more important at the workplace. The geo stack: Thanks to its multi-platform nature and its broad feature set, QGIS is one of the most widespread open source GIS applications and does a good job on the desktop. A native mobile touch interface for field-based data review and acquisition is the missing bit in the open source geo stack. Core requirements: From developing QGIS for Android we have identified the core requirements for mobile applications. More than that, we have identified what must be avoided: complexity, small UI elements and project definition work. Less is more: Thanks to pre-defined modes for tasks like data acquisition and data review, users can focus on the task at hand. Clear user interface elements and adaptation of tools for touch input, while offering great precision for coordinate recording with an intuitive interaction design, make it a pleasure to use and an efficient tool. Synchronisation: To bring the data back into your infrastructure from the device, we have developed a new offline synchronisation tool to allow seamless data exchange between the device and the existing geo infrastructure.
10.5446/20333 (DOI)
So without further ado, I present to you the presentation about GeoServer and OpenLayers 3. Hi. We'll be talking about vector tiles generated by GeoServer and rendered using OpenLayers. Others will disagree, but for me the most interesting thing to happen in GeoServer in the last little while is vector tiles. They're really easy to use and they open up a lot of possibilities. I'm really excited to talk to you about them. Here's our cast of characters. Gabriel, he wrote the original GeoServer vector tiles components. And I'm David; I'm currently moving vector tiles from a GeoServer community module to a more official extension and I'm the module maintainer. Andreas, he wrote the OpenLayers components. He'll be talking about OpenLayers later on. Let's get started. I want to quickly introduce vector tiles in relation to what's familiar: tile maps. Here's the standard web map using OSM data. We've all seen them. If you pan around, little independent squares called tiles get pulled in from the server. These maps are great. The server only has to draw each tile once. It's cached and it's very quick for everyone using the map. They make for fabulously interactive maps. Here's an OSM tile. A lot of time and effort went into the cartography for this. It looks great. It's clear and beautiful. It's super easy to put together a map using these tiles. However, there's not much you can do with them other than draw them. You're not getting any access to the underlying feature line work. You don't get to control how the map looks. It's set in stone. What if we do the hard work of data prep, deciding what's available at each zoom level, getting ready to draw, but don't actually draw it? Instead, we package real features into squares just waiting to be drawn. We call these data tiles, without any styling information, vector tiles. We take the vector tiles, put them in a grid and cache them. We then deliver them in exactly the same way as an image tile map. Except instead of just blitting images on the screen, the client decides what's drawn and how the map looks. It does the rendering. As a result, you get a personalized, interactive vector tiles web map. I want to emphasize that image tile maps and vector tile maps are very similar, except image tile maps have tiles pre-rendered on the server and vector tile maps have tiles rendered on the client. Other than that, the tiling system is pretty much exactly the same. Vector tiles are empowering and efficient but a little bit harder to work with. There are a thousand different stories to tell with your map data. Vector tiles empower the client instead of the server to decide which story is told, because the client decides what the map looks like. You're not constrained to the server's cartography decisions. Vector tiles are very efficient because you can tell all the thousand stories with the same tiles. You only need one copy of them on the server and you just render them differently on the client. They're also efficient because they can pack a lot of information into a little space. This is especially true for the high resolution displays that are really popular nowadays. You have to bring in a lot of pixels and that means a lot of extra image tiles. That's a ton of extra server bandwidth and storage. Drawing vectors, which look good at any resolution, instead of pre-generated images can be really efficient. It's a huge savings.
On the last point, vector tiles can be more difficult to work with, especially if you're doing your own styling, because you need to understand both the feature data and the tools. Luckily, Andreas has made the tools as easy to use as possible. I've introduced vector tiles and I'll be continuing to talk mostly about the server side: how GeoServer creates vector tiles and how you can control their content. The takeaway of this is that vector tiles are easy to use with GeoServer, and all the tools and techniques that you're familiar with, you get to reuse. Andreas will be talking about and demoing the client side, OpenLayers. He's got a really slick demo and he's going to show you some of the cutting-edge new functionality. It's super easy to get vector tiles from GeoServer and GeoWebCache. Instead of asking for an image like a PNG from GeoServer's WMS, you just ask for a vector tiles format. That's it. Instead of an image, you're going to get a vector tile back from GeoServer. Instead of drawing it into an image canvas, GeoServer draws it into a data file. We'll go into more details of that later. To make them available through GeoWebCache's tile caching service, just tick the box for the vector tiles format you want to publish and it does the rest. There's nothing else to worry about. That's it. Five seconds and you're up and running. Before we go into details, I just want to put vector tiles in the context of the OGC services. It's a little confusing because you make maps with them, so you think WMS, which is used to make maps. There's also tiles in the title, so you think of tile services like WMTS. There's also vectors in the title, so you think WFS, which is about querying and retrieving features. It's all a little confusing. To be clear, we use GeoServer's WMS with a special vector tiles renderer to generate tiles. We use GeoWebCache to handle the caching and tiling. GeoWebCache is a bit magical and it just works. WFS, in the context of vector tiles, is surprisingly pretty much independent. A lot of people feel that because WFS is about access to unstyled feature data, there should be a connection between vector tiles and WFS. It turns out that although both do provide actual access to feature data, line work and attributes, they differ because vector tiles provide easy-to-render features and WFS returns the glorious feature details that are needed for actual GIS analysis. GeoServer's WMS rendering process is fairly comprehensive. There are two main renderers. The streaming renderer, used to make image maps, draws styled features onto an image. The vector tiles renderer, which is used to make vector tiles, draws unstyled features into a data file. In the end, both renderers are very similar: they both preprocess and render the data, either onto an image or into a data file. Just in the interest of time, we're not going to talk about most of the steps, but be warned that the geometries will be generalized to make them smaller, perhaps transformed to another CRS, and very small sub-pixel features may be omitted. In almost all cases, this is exactly what you want. I'm going to talk about clipping and controlling the creation of vector tiles using SLD. We clip, so we're only sending what needs to be drawn to the tile, saving bandwidth, storage, and rendering time. But we don't clip right at the tile boundary. If we did, the tiles would look wrong at the edges, especially if you're using thicker styles where features cross or are near the boundaries.
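As a concrete illustration of "just ask for a vector tiles format": the request is an ordinary WMS GetMap, only the format parameter changes. A hedged Python sketch using the requests library; the host, workspace and layer name are made up, and the exact format strings to use are the ones your GeoServer advertises in GetCapabilities (the MVT one is typically application/x-protobuf;type=mapbox-vector-tile, with GeoJSON and TopoJSON variants also available):

```python
import requests

# Hypothetical GeoServer endpoint and layer.
WMS_URL = 'http://localhost:8080/geoserver/wms'

params = {
    'service': 'WMS',
    'version': '1.1.1',
    'request': 'GetMap',
    'layers': 'osm:roads',
    'styles': '',
    'srs': 'EPSG:900913',
    'bbox': '0,0,10018754,10018754',  # one tile-sized extent in Web Mercator
    'width': 256,
    'height': 256,
    # The only real difference from an image request:
    'format': 'application/x-protobuf;type=mapbox-vector-tile',
}

resp = requests.get(WMS_URL, params=params)
resp.raise_for_status()
print(len(resp.content), 'bytes of vector tile data')
```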
In fact, I recently fixed a clipping-too-close-to-the-boundary issue, and it makes maps look horrible, as you can see up in the corner. Just a quick note: because of the extended clipping area, you will see overlapping data in adjacent tiles, which isn't a problem when you're rendering, but it's sometimes something to be aware of if you're doing any analysis with the tiles. OpenLayers draws all the data in the tile, but the rendering is clipped to the exact tile boundary. This is exactly what you want, and your maps look great. The final piece of the puzzle is how we decide what data is in the tiles at various map scales. For example, when zoomed out, I only want to show highways. When zoomed in, I want to draw most of the roads. In comes SLD, Styled Layer Descriptor, which is the standard OGC way to describe how to style a map. SLD is usually associated with making pretty maps: colors, fills, thick lines, things like that. Styling is the job of the streaming renderer for drawing images. The vector tiles renderer is simple. If the SLD rule would have drawn a feature onto the image, it draws it, without any styling information, into the vector tiles data file. It's as simple as that. If your SLD would have drawn the feature onto the image, it will be in the vector tiles data. There are basically three parts to SLD rules. Scale: how zoomed in or out you are controls which rules get turned on and off. Filter: which tells you what data to use. And styling, which isn't used for vector tiles, but is important for image tile maps. You could make an uglier map, but I think you would have to really try. I'm showing an SLD rule for residential roads, using a data filter, only at scales between 1:1,000 and 1:70,000. For the streaming renderer, it's drawn on the image with blue lines when you're zoomed in. For the vector tiles renderer, it gets put into the vector tile data file, with no styling information, when you're zoomed in. Remember: when the streaming renderer would have drawn the feature, the vector tiles renderer puts it in the data file. I'm using a really ugly map here to make the styling easy to see, but also to prove a point. With vector tiles, you get to decide what your map looks like. You're not stuck with my cartography decisions. And that's good. Traditionally, you would use GeoWebCache to tile and cache your WMS images using this SLD. And boom, you've got a really ugly web map that shows different details of data depending on how zoomed in you are. Really standard stuff. But if you want to use my data, but not my colors or my choice of styling: click, that's all it takes. Five seconds to make this available as vector tiles in GeoWebCache, and we empower the client to make the styling decisions. Now we're cooking with gas. Here's an example of an image and a vector tiles map. I put the same styling in both maps, and as you'd expect, they look and behave pretty much exactly the same. You can use vector tiles in most situations where you would use image tile maps. There are two types of common changes I want to show you. First, just changing how the client map looks, the styling. Andreas will be demoing this later on. Second, changing the server's SLD to change what data gets put into the vector tile. The first is simple. A styling change is shown here. This is where vector tiles really shine. If you want your map to look different, you just have to change the client's styling function and nothing on the server changes.
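To see for yourself which features the SLD's scale and filter rules actually put into a tile at a given zoom level, you can pull a cached tile from GeoWebCache and decode it. A hedged sketch: the gwc TMS URL pattern below follows the GeoServer vector tiles tutorial but may differ on your setup, the layer name is hypothetical, and it assumes the third-party mapbox-vector-tile Python package is installed (note that TMS counts tile rows from the bottom, unlike XYZ):

```python
import requests
import mapbox_vector_tile  # pip install mapbox-vector-tile

# Hypothetical cached layer published with the 'pbf' format in GeoWebCache.
z, x, y = 10, 536, 640
url = ('http://localhost:8080/geoserver/gwc/service/tms/1.0.0/'
       f'osm:roads@EPSG:900913@pbf/{z}/{x}/{y}.pbf')

resp = requests.get(url)
resp.raise_for_status()

tile = mapbox_vector_tile.decode(resp.content)
for layer_name, layer in tile.items():
    print(layer_name, len(layer['features']), 'features')
```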
You don't have to regenerate the tiles. I updated the OpenLayers styling function to do road casing, as you can see. I don't have to change anything on the server. I keep using the exact same tiles and I now have two different-looking maps. The second type of change is more complex. My map is missing some of the features I want to draw. The features are just not in the vector tiles data. The SLD isn't putting OSM footways into the vector tiles because there's no SLD rule that says to do that. To add this data to the vector tiles, I update the SLD and add a rule so it renders the footways into the vector tiles, as I show here. GeoWebCache will magically notice the SLD change and reset your cache. This is an example of a change on the server to change the actual content of the vector tiles data. The previous one was a client change to change the styling of the map. Now that I've added footways to the server vector tile data, I also update my OpenLayers style so it renders them. I've chosen pink dashed lines to go with my world's-ugliest-map theme. This is the basic idea behind vector tiles in GeoServer. We use SLD to render features, including generalization and clipping, into a data file that GeoWebCache makes available in a cached and tiled manner. It's really easy. Andreas is going to talk about OpenLayers on the client now. Thank you, Dave. So let's take a look. For those of you who were not here in the morning session, OpenLayers is a mapping library that can map pretty much everything: images, image tiles, vector data, and now also tiled vector data. It can do so in any projection; you can even have raster reprojection in OpenLayers. It has full rotation support, so if you have a navigation application, you can have a heads-up map instead of a north-up map. We support animations that transition your viewport from one zoom level, position, whatever, to another. You can even combine these animations, so you could fly through valleys or whatever you can think of, and it integrates very well with other libraries like, for example, Cesium or D3. Vector tiles in OpenLayers were added in version 3.11, I think, so about 10 months ago. We already gave a sneak preview last year at the conference in Seoul, but that was not in a release yet. And we support basically all the vector formats that are available in OpenLayers, but Mapbox Vector Tiles is the preferred format because it has the best optimizations for rendering. And styling works the same as with untiled vector data, using OpenLayers style functions, and I'm going to show you how these style functions work in a minute. Also, because the attributes that are required for styling are transferred with the vector tiles, you can have some interactivity in your map, for example hover over features and get information about that feature. So you have access to the feature attributes. And I cannot stress this enough, Dave already said it: it's not a replacement for vector as in WFS data. It's really not made for analysis. It's made for rendering. And vector tile support in OpenLayers is encapsulated in the ol.format.MVT class. It uses Mapbox's PBF library to read the binary tile data, and it uses Mapbox's vector tile library to extract the layers and features from the vector tiles. In OpenLayers, what we add on top of that format is that you can also configure it to only read a subset of the layers that are available in the vector tile.
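The "change on the server" case, editing the SLD so new data ends up in the tiles, can be scripted against GeoServer's REST API instead of the web UI. A hedged sketch with the requests library; the style name, credentials and file path are placeholders, and as noted above GeoWebCache notices the style change and resets the affected cache:

```python
import requests

GEOSERVER = 'http://localhost:8080/geoserver'
STYLE = 'osm_roads'            # hypothetical existing style
AUTH = ('admin', 'geoserver')  # placeholder credentials

# Upload the edited SLD (now including the footways rule) over the old one.
with open('osm_roads_with_footways.sld', 'rb') as f:
    resp = requests.put(
        f'{GEOSERVER}/rest/styles/{STYLE}',
        data=f.read(),
        headers={'Content-Type': 'application/vnd.ogc.sld+xml'},
        auth=AUTH,
    )
resp.raise_for_status()
```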
And instead of creating ol.Feature instances, which have the full geometries, full attributes, full event listening for changes and everything, we create very lightweight render features that have coordinates in pixel space and are very fast to render. So usually you would create a vector tiles layer like you would an XYZ layer that has raster tiles, like an OpenStreetMap layer. But in OpenLayers, we can take advantage of the WMTS output format. So this code snippet shows how you could read the WMTS capabilities from GeoServer for a layer that provides Mapbox vector tiles. And with the tile URL function and the tile grid that we get back from parsing the WMTS capabilities, we can finally create our vector tile layer. OpenLayers uses the class ol.layer.VectorTile for vector tile layers, and the source has the same name. And the most interesting part is the last line here in this code snippet, the style. OpenLayers style functions are called with the feature that's being styled and with the resolution that's used to render the feature. And this allows for very flexible styling. One simple example in this snippet here: we see streets with road casing styled nicely by having two styles, one with a thicker line width for the casing and another with a different color and a thinner width for the actual road. And OpenLayers also has z-index support, so you can tell the renderer the order that you want the features rendered in. So obviously you want the casings below the actual street. For getting interactivity when hovering over the map, you can also use the same standard OpenLayers features that you would use for vector layers. You can register for a pointermove event, which gets triggered whenever you move your pointer over the map. And inside that listener, you can use the forEachFeatureAtPixel function. That function is called with the feature, and in this case we just want to display the name of the feature in an overlay at this position, next to the feature itself at its coordinate. Let's do some live demo to show you these features. I'm going to move this to the big screen. Where is it? Here we go. And now I should see where to make it full screen. Somewhere here maybe. Here we go. All right, so this is the ugly map that Dave had mentioned, which has the same basic styling that he used in his SLD. I went ahead, took the same map and changed the styles a little bit, with road casings and everything. And I don't know if you've noticed, but the map that Dave has shown was using EPSG:4326 as the projection, so it might look a bit unfamiliar. So I made another change and requested the Web Mercator tile set instead from GeoServer, and I ended up getting a map that looks a bit more familiar. You can also see labels here. And the nice thing about vector tiles is that when you rotate the map, the labels stay upright. Let me see if I can rotate the map. So the labels stay upright. And I also mentioned the interactivity. So as I hover over the features here, you can see the names of the streets. Looking a bit into the future, I'm currently working on a library that translates Mapbox GL styles into OpenLayers style functions. And this is how it looks. Currently, as you can see, there's obviously no support yet for line labels and point features. That's what I'm currently working on. And one feature that's still missing in OpenLayers is labels that follow curves or lines. That should also be available very soon.
And then you will really be able to get very nice-looking vector tiles maps in OpenLayers. And this already brings us to the end of our presentation. And I'm just not able to find the slides again because I don't know how the screens are arranged. But that shouldn't keep you from asking questions. Thank you very much. Thank you guys for this. So thank you guys for this really great talk about these exciting new features. And I think it really adds a lot to web mapping. I didn't register any questions during the talk, but I think there might be questions. So we start right here. Hello. Did I get it right that I have to define the style twice, first in SLD on the server and then again in OpenLayers on the client? There is no way to do it only once? Very good observation. That's really a thing that we've been struggling with in this whole FOSS4G ecosystem for years, that there's no good story for transferring styles from one system to another. The SLD format is an XML format that is suitable for doing that. But it's also quite a verbose format, and OpenLayers has added quite a few interesting additions that are not part of standard SLD. So what we have in GeoServer is somewhere in between SLD and Symbology Encoding; it adds geometry transform functions and other interesting features. And my personal opinion is that with the new Mapbox GL style format, for the first time in this FOSS4G ecosystem, we have a style format that should make it easy to transfer styles between systems. And I cannot make a 100% promise, but if I have funding and time to develop it, I am also planning to work on a library that translates SLD to Mapbox GL styles. And then you have the whole round trip from GeoServer to OpenLayers. You could even do a GetStyles request, a WMS GetStyles request, take the SLD and turn it into OpenLayers style functions and then get the same rendering in OpenLayers. Did that answer the question? I hope so. It wasn't me. Thank you for the great speech. I have a thing that I would like to ask you. We always run into kind of the same problem, which is about printing. Have you tried printing with vector tiles? Is that possible? I mean, you could do server-side prints for WMS maybe, or client-side print. I can say a few words about client-side printing. In OpenLayers, the whole map is rendered to a canvas. And you can take this canvas and print it. And if you want to print at a higher resolution, all you have to do is change the pixel ratio that you render the map in. And the tiles will get rendered at a higher resolution but using the same styles. So that should be easy if you want to print from the client side. There are even libraries that can create a PDF on the fly from whatever you add to it on the client. And there is an official demo in the OpenLayers examples; if you look for PDF and print, that's for client-side printing. I cannot answer the server-side printing question. Okay. So now we have five minutes. So if rooms have changed, then probably it's the time. I actually have one question here. I was just wondering about the SLD on the GeoServer side. If you put in a rendering transformation or a geometry transformation, will that be reflected in the tiles? No. Okay. So are there any more questions left? Because we would have five minutes. Oh, okay. Sorry. Actually, when you have a question, you can just shout out, because there's no need to be shy. And it's kind of hard to spot people, because the room is not that easy to scan.
Actually, what I'm doing when I'm sitting up there, I'm trying to scan the room, so you can raise your hand during the talks. I'm wondering about performance. So as you were showing the demo and you tried to rotate it, it was like tick, tick, tick, tick. Is that normal or is it getting better somehow? I think there was something because I was switching screens a lot, so I didn't get the full canvas rendering performance. But we do have room for performance improvements for label rendering still in OpenLayers. So if you rotate the same map without labels, it will be faster. There is something with label rendering, and it's the halo that you add around the font that takes a while to render. But there are ways to improve that. So that's also on my list for performance improvements. Okay. So I would have one more question over here. I think we're doing okay on time. So. Yeah. Thank you. Just one quick question in terms of preparing the vector tiles. Is that possible on the fly as well? Or does it always have to go through GeoWebCache? No, you can directly request vector tiles from the WMS. So you can generate them on the fly. Okay. I don't see any more questions. And we did quite well on time. So thank you again for this great talk and this introduction.
The latest release of GeoServer adds support for creating Vector Tiles in GeoJSON, TopoJSON, and MapBox Vector Tiles format through its WMS service for all the vector data formats it supports. These tiles can be cached using GeoWebCache (built into GeoServer), and served with the various tiling protocols (TMS, WMTS, and WMS-C). Thanks to very recent OpenLayers 3 development, these Vector Tiles can be easily and efficiently styled on a map. This technical talk will look at how GeoServer makes Vector Tiles accessible through standard OGC services and how they differ from normal WMS and WFS usage. It will also look at how OpenLayers 3 - as a simple-to-use vector tiles client - interacts with GeoServer to retrieve tiles and effectively manage and style them. OpenLayers 3's extensive style infrastructure will be investigated.
10.5446/20332 (DOI)
So, I'm going to start with the second speaker. Okay. Now, our second speaker will talk about the scalability issues related to GeoNetwork. And it's Joana Simões. I hope I pronounced it right. Yes, thank you. Joana. Hello. Good afternoon. First of all, thank you very much for being here, and thanks for having voted for us. It's really nice to be here in this big room. So, continuing a little bit from what Maria said, I'm going to focus on one specific aspect of GeoNetwork. First of all, like a small poll: how many people here use or have used GeoNetwork? Can you please raise your hands? Okay. And how many people work with big data? So, with data sets, let's say, larger than 32 terabytes. Okay, we have a couple of people. That's good. So, in this talk, as I said, we are going to speak about the scalability of GeoNetwork. Basically, go through some of the limitations and, more than anything, discuss some proposals and some scenarios for GeoNetwork. So, first, setting the context a little bit for big data. Maybe it's not a reality for everyone, but there are, in fact, some use cases, because the number and variety of data sources have increased a lot, with sensor data and also user-generated content. And a great deal of this data actually has some sort of location attribute. So, we are looking at large spatial data sets, and this is going to be more and more common in the near future. So, what happens when we have really, really large data sets? We can always increase the number of CPUs, the RAM, the hard drive. But at a certain point, we can no longer increase it, and that's the limit of vertical scalability. And then we need to start thinking about distributing, about horizontal scalability, and this is mostly what we are going to focus on in this talk. So, you probably all know what GeoNetwork is. It's a catalog, as Maria said, for geospatial information. Most of the people here are users, so I don't have to explain a lot about this. So, just a few important details. GeoNetwork accesses data which can be stored remotely, but it stores some metadata locally in a database. So, some data, because metadata is really data; it actually stores data locally. And as Maria also mentioned, it uses a search index which is based... Oops. Okay. So there is some problem with my presentation. Okay, there seem to be a couple of empty slides here. But it is just one or two slides, so maybe I just talk. Okay, I'll just continue. Sorry about this. What I wanted to say in these empty slides is that there is an index which is based on Lucene, and this index is stored locally on each GeoNetwork instance. So, later we are going to see this limitation. And the second thing is that this database that I described is in fact a relational database, because of the way databases are supported in GeoNetwork, the library that is used. It actually only supports relational databases such as Oracle or H2 or PostgreSQL. So, there were some proposals for scaling GeoNetwork. I mentioned big data as an argument for scaling GeoNetwork, and maybe this is not relevant for most people. But there are other arguments that are probably more relevant, such as high availability. So, if we have a cluster of nodes, then when one node dies, the other one can take its place. So, it's a failover scenario. This is quite useful. There is also the scenario of load balancing.
So, we can actually distribute the workload. These are arguments that are relevant probably for most people. And there were already proposals that were looking at scaling GeoNetwork based on these arguments... Oh, another empty slide. It's quite challenging. So, the original presentation... No, the original one was a PDF. So, I'll try to copy it again, just one second. Or maybe just read it from... Okay, this one seems okay. So, the current scenario is that GeoNetwork cannot be clustered. And the limitations are basically the ones that I mentioned: the Lucene index. There are also the file uploads, which are stored locally on the computer's file system. And there are a few other things. So, I'm going to explain these proposals, how they look at these restrictions and what they suggest for overcoming them. The first scenario is the one that comes from the proposal of sharing features between different instances. You can see in this image that we have the different nodes, and each node has its own Lucene index. But this index is actually synchronized between instances using a message broker, using the Java Message Service. So, when there is a change in one index, the other instances can know about it. For the other elements, the data directory is also shared. We can do this by using a shared file system, a network file system. And the database is also shared. So, this scenario needs some synchronization between the nodes, and there are a few other aspects that have to be handled. For example, the site UUID needs to not be tied to an instance, but it should be assigned to one specific node. And the same goes for the HTTP sessions. So, the second scenario is the scenario where we no longer have one index per instance, but instead a search server. We can use Solr, which is based on Lucene. And Solr is a search server that has a lot of nice features, but the one we are interested in, for the scope of this presentation, is index sharding. So, we can actually split the index and distribute it across different instances, which can also be on different physical machines. The other elements are kept: the data directory and the databases. And we still need the message broker to synchronize some information between nodes. So, this is an example, just for you to see, of a Solr architecture with the distributed nodes implemented using Docker. Interestingly enough, the replacement of Lucene by Solr, by the Solr server, is something that is being implemented in one branch of GeoNetwork, in the Solr branch, mostly by François. I don't know if he's still here. He's not here. And there are plans to merge these developments into a GeoNetwork 4.0. So, this could actually become a reality. So, this is the third scenario. And the third scenario is actually distributing everything. You can see the file system is no longer a shared file system but a distributed file system. For instance, the Hadoop file system (HDFS). And in this case, we have multiple nodes. So, in the case of Hadoop, we have the name nodes, which can also have a backup in recent versions, and we have the data nodes. And we can store very large files because they can be split across machines. And this should be transparent for GeoNetwork. The message server, the message broker itself, could be clustered. So, if we take, for instance, RabbitMQ, we could also cluster the message broker.
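To make the index-sharding idea of the second scenario concrete: once the catalogue's search index lives in Solr, a query can be spread over several shards with Solr's standard distributed-search parameter. This is a generic Solr sketch, not GeoNetwork code; the host names and the core name 'catalog' are made up:

```python
import requests

# Hypothetical Solr nodes, each holding one shard of the catalogue index.
SHARDS = 'solr1:8983/solr/catalog,solr2:8983/solr/catalog'

resp = requests.get(
    'http://solr1:8983/solr/catalog/select',
    params={
        'q': 'title:landcover',   # example metadata search
        'shards': SHARDS,         # fan the query out over both shards
        'wt': 'json',
        'rows': 10,
    },
)
resp.raise_for_status()
print(resp.json()['response']['numFound'], 'matching metadata records')
```

With SolrCloud the sharding and routing are handled by the collection itself, so the explicit shards parameter is only needed for manually partitioned indexes.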
And so, we have the search server, also clustered. And we have the database distributed as well. So, the question is: can we distribute the current databases? As I said before, they're relational. It's a little bit tricky to scale, to distribute, relational databases, although there are efforts to do it. But by design they were not really designed with that in mind. So, there is another breed of databases that is more suitable for horizontal distribution, and these are the NoSQL databases. There are many paradigms for NoSQL databases; one example is the document-oriented databases, such as, for instance, MongoDB. So, if we used such databases, we could distribute them and cluster them very easily. I said before that GeoNetwork is using the Spring Data framework, currently with the JPA library, which is based on JDBC. There are other Spring Data libraries that actually support NoSQL databases, such as MongoDB or even CouchDB. So, all these changes that I mentioned are actually quite difficult to implement if we have a monolithic structure. If we think about an architecture that is more based on services, a service-oriented architecture, it will be much easier to scale the parts of GeoNetwork that we want to scale. So, we would like to reduce the complexity of implementing this scenario, and this is where we can talk about microservices. I've shown quite a lot of slides with Docker during this presentation, and it's because I think it could be a very helpful tool to help us implement this scenario. And since the latest version, released less than two months ago, Docker has integrated clustering natively into its engine. So, I think it would be very suitable if we want to think about this kind of microservice architecture scenario. So, just to finish, some final thoughts. More than anything, I would like to promote some discussion around this topic. The proposals that I showed you before, the first proposals, were actually based on GeoNetwork 2. As Maria said, there were quite some radical changes from GeoNetwork 2 to 3, so it would probably be a good idea to review them in the light of the current codebase. The merger of the Solr branch that I mentioned before is something very positive that could contribute to the second scenario that I described, where we have index sharding. But the first scenario, we are still quite far from it. So, this is more than anything an expression of interest, and this is the moment where I can suggest some activities that could help us to implement this scenario. One thing would be to look at extending the database support to other types of databases, to do this refactoring to use microservices, or at least to look at the feasibility of this refactoring. And of course, I think Docker could be very useful to test this kind of scenario. So, I want to finish with just a reflection, with a call, really, for everyone. If people are interested in contributing with code or with funding towards this scenario, I think this could be viable for the future. And it could make GeoNetwork a very robust catalog for the next century. So, thank you very much. Thank you, Joana, for this talk. We have plenty of time for questions, and I think we just merged the two talks we've heard so far. So, if you have questions for Maria, you can ask them as well. So, questions? Just say who it is for. Yes, well, it could be to both.
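To illustrate why a document-oriented store maps naturally onto metadata records and onto horizontal scaling: each record is simply a document, and the client connects to a replica set or sharded cluster rather than a single server. This is a generic pymongo sketch of the proposal, not anything GeoNetwork supports today; the connection string and field names are invented:

```python
from pymongo import MongoClient

# Hypothetical three-node replica set.
client = MongoClient('mongodb://node1,node2,node3/?replicaSet=rs0')
db = client['geonetwork']

record = {
    'uuid': '3f1d2a9c-0000-0000-0000-000000000000',
    'title': 'Land cover 2016',
    'abstract': 'Example metadata record stored as a document.',
    'keywords': ['land cover', 'raster'],
    'schema': 'iso19139',
}

db.metadata.insert_one(record)          # one metadata record, one document
db.metadata.create_index('keywords')    # index for faster keyword lookups

print(db.metadata.count_documents({'keywords': 'raster'}), 'records found')
```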
Just on one of your last slides where you were talking about Mongo, was that just for the indexing, or for the files, actually your geo data, as well? So, maybe I was not clear before, but GeoNetwork stores some data as well, some metadata, but it's data, so it stores it in a database. So, I was suggesting using Mongo to distribute this metadata. As for the data itself, it can already be remote. Do you want to explain this further? I don't know which slide it is. Okay. I mean, she's asking about the data, if Mongo is... I think that the metadata itself is very structured, so maybe NoSQL databases are not so good, but they can be used. Yeah, they can be used. So, it was for storing the metadata. Okay, thank you. Any other questions? More questions for Joana. There was a slide concerning the Hadoop file system. And from what I noticed, it looked a bit like Hadoop 1, because it doesn't have the YARN controller and other things there. Have you looked into this? Which one to use? First, second, any reason to only use the first one? Or is it just an example, and you're still thinking about it? We were actually not using any... Yeah, but for the first scenario, could there be any reason that you would prefer to use Hadoop 1 instead of Hadoop 2? I haven't checked that. No, it was just... It's just an idea. Yeah, it was just a slide showing Hadoop. It's really far from... At this point, we are more at an analyzing stage, more just thinking whether having a shared file system or having NoSQL or something like that could help, rather than really looking at the exact technology to use. So yeah, the example was the Hadoop file system and Mongo, but instead of Mongo it could also be, I don't know, CouchDB or another technology. Yeah. Another question? I forgot to tell you, I forgot to tell you this. Oh, look, it's on the screen. In the end, I rushed and got to finish on time. Yeah, that's it. Sorry, you talked about the architecture of GeoNetwork, but I want to add also another direction that GeoNetwork, in my opinion, needs to consider. It's a dynamic relation between metadata and the data. Do you have something to say about this connection? You mean also storing the data, or do you mean how to relate data and metadata? How to link? How to link, but maybe how to update your metadata when you update your data. Okay, for WMS and WFS services, if you configure it right, you can automatically generate almost all the metadata just by scraping the capabilities. You're thinking of something like that, right? Something like that, but also when you're... It's a long time that I've been working with GeoNetwork, but what I'm looking for is a link between my data available, not in WMS, but directly in my database, in my PostGIS database. And to create something quite dynamic... Yeah, the main problem with that is that you can have so many different formats of data that you cannot find a common solution for all of them. You have to check a solution like the WFS/WMS services, in the GetCapabilities, adding some special tags. So, on the database, if you have... We can do some kind of harvesting for a database, but then in the database you will need somewhere to place this data to build the metadata. So in the end, is it useful to have the metadata somehow in the database so you can scrape it and build it for some big file, a raster file? Is that useful, or is it better to keep it only in the metadata catalog? I think it depends on the use case. Can I jump in, Maria? Yes, please. Hi, this is Paul.
Yeah, I'm in the next presentation with Joana, so let me jump in a bit. I think we have to make sure that... I think with GeoNetwork we manage a lot of use cases. So people use GeoNetwork within their local organization, and this kind of feature is really helpful. For that use case, it's really useful. The other use case is a national SDI, which harvests a lot of local GeoNetworks, where the status of a document can never be changed because it represents something that is submitted by a local government. So the national portal has a kind of legal status, the document itself. So these are two conflicting interests that we try to solve in one piece of software, which is a challenge we have a lot. But your use case is very interesting to look at. Yes, but the thing is that there are so many use cases that it's difficult to find a common solution, right? So in the end, we found a solution with the WMS/WFS services, which is one of the most straightforward. And I think there's something for databases, not a GeoNetwork harvester, but there is something... I guess the databases have this kind of metadata there. So if you have a concrete use case, please share it on the GeoNetwork mailing list, and we can check what we can do. Okay, thank you for your questions. Is there one last very quick question? Just a little comment about the scalability of the underlying database. There are concepts inside PostgreSQL that could be helpful for you to manage that. You don't need to switch to a different database engine for that reason. There's bi-directional replication; there are add-ons that can manage that. Yeah, I was mentioning that there are actually some projects that scale, that cluster, relational databases. But what I was trying to emphasize is that the NoSQL databases are somehow by design more... they addressed this from the very beginning. So it's something that is built into their logic as well. But for Postgres, there are attempts at doing it, yes. Yes, and it's true that usually the bottleneck is not in the database, but somewhere else, maybe in doing a complex query or whatever you're doing. Well, thank you for this little discussion.
In recent times, phenomena such as the Internet of Things or the popularity of social networks, among others, have been responsible for an increased availability of sensor data and user-generated content. Being able to ingest, store and analyze these massive volumes of information is a standing challenge that can no longer be ignored. The data about this data is, generally speaking, less of a problem: if we consider, for instance, that trillions of sensor records may share the same metadata record, catalogs have been less exposed to the challenges that took the database community by storm. Nevertheless, a large variety of datasets can also pose performance challenges to traditional catalogs and demand increased scalability. In this talk we will look at strategies for scaling GeoNetwork through load balancing and at its current limitations, and we will discuss potential improvements from adopting distributed search server technologies such as SOLR or ElasticSearch. On the database side, we will review the current database support, which is limited to ORM, and discuss the possibility of extending it to support NoSQL databases, which could be horizontally scaled, unleashing a new generation of metadata storage.
10.5446/20329 (DOI)
First, Marco will tell us about the new features of QGIS, and then Hugo will continue with the new version, QGIS 3.0. For the questions: if you have questions for the speakers, wait until the student passes the mic to you, so we can have the questions also for our audience on the video stream. Okay, Marco, please. So welcome to this talk about the new features in QGIS. Last year at FOSS4G, the current QGIS version was 2.10. QGIS has a release schedule of four months, so every four months there is a new QGIS version. Since last year's FOSS4G, three new versions have appeared: 2.12, 2.14 and 2.16. 2.14 is a so-called LTR. LTR means long-term release, which means that this version will get updates and bug fixes for one year. So normally the power users upgrade with each version, and the users who want to be on the safe side normally upgrade once a year, from LTR to LTR. Now, the new features: simply said, there are too many new features to present them all. If you are interested in all the new features, you can go and see the visual changelogs on the web. There is a very detailed list of what's new, and it's really worth it. If you are using QGIS daily, it's really worth looking there, because there are a lot of small things which are quite handy but easy to overlook when you just see the GUI and have no description. First, we are going to look at the development activity in QGIS. Not surprisingly, it's going up and up. This is a statistic from OpenHub.net, a site which analyzes how many commits and how many contributors per month there are, and so on. What we see here is a slow project start, continuous growth, and then, since 2014, very high development activity, which means there is a lot of interest in the project. And not only a lot of commits: the number of contributors per month is also rising, which is a very good sign. When you go back ten years, many experts in the field were saying that with web services, desktop GIS would disappear, and so on. In my experience, and we see it here, it's quite the opposite. The interest in desktop GIS is even growing, and web services are a good complement to desktop GIS, because once you have web services, the administrator needs to look at the data, people need to edit data and so on, and power users use desktop GIS as a client for web services. Some numbers related to the project: over 800,000 lines of code (I hope we soon reach the mark of one million lines of code), nearly 34,000 commits by 304 contributors, and OpenHub.net says that over the past 12 months, 111 developers contributed new code to QGIS. This is one of the largest open source teams in the world, and it is in the top 2% of all project teams on OpenHub. Very impressive. Another thing that I noticed when going through these changelogs is that not only the number of commits and the activity is growing, but also, I would say, the commercial activities are growing. QGIS quite nicely combines voluntary work and funded work, so nowadays most development activity is actually funded by someone. Now let's go to the new features. A very important one is the web feature service client in 2.16. It has been nearly completely rewritten. The old WFS client was okay, however it only supported WFS 1.0, and some things got broken during the shift to multi-threaded rendering.
So the new WFS client supports versions 1.0, 1.1 and 2.0, and it has a better caching strategy. Downloaded features are stored in a local SpatiaLite DB, so they don't have to be downloaded each time. The advantage compared to the old client: the old client cached everything in memory, so with a really huge WFS you could run out of memory; not anymore. The download takes place in the background, and it is even clever enough to use GetFeature paging, so it downloads the features in chunks. It also supports some WFS 2.0 concepts; for instance, in WFS 2.0 it is possible to use joins in a GetFeature call, and that is now also supported by the WFS client. Then this one is, for me, really a killer feature. It's not even in the QGIS core, it's a plugin: a plugin to debug Python plugins. It's very important. People who write a lot of Python plugins had the problem of how to debug them, so people were using prints, or there were possibilities to use other debuggers like pdb or remote debuggers, but they all had some kind of inconvenience. Now there is First Aid (the name of the plugin is the First Aid plugin), and it provides a graphical debugger, supports breakpoints, step into, step out, step over, continue, so it makes your plugin development really more productive and more fun. A new feature in 2.12 is the authentication framework. Up to 2.10 it was necessary to enter passwords before creating, for instance, a DB connection, or you could store them, but then they would be stored in the project file, and if in a company people pass project files around, they see each other's credentials, so it was not really a good approach. In 2.12 there is the new framework, which means passwords can be stored in a local SpatiaLite DB and you don't need to remember them each time. You can have a master password, you enter it at the first access, and then QGIS will take all the credentials from this database. Of course the old storage mechanism in the settings or in the project file will still continue to work. A good thing about this authentication framework is that it is possible to extend it with different authentication methods. Inside bigger organizations, people often have very particular authentication mechanisms that they have to support, coming from the IT department, and it is now possible to support those through plugins, even Python plugins. An area of QGIS with a lot of dynamism is the symbology. It's actually a good example of how a community works, because people come up with a lot of good ideas. Normally, when I see some new feature for the first time, I sometimes think: oh, who needs that, it's too exotic. But then, after reading the description and seeing some samples, I think: oh, that's really very handy, I want to use that too. That happens to me very often. The marker symbolization has been improved. There is the size assistant that you see here: if you have a data-defined size for your marker symbols, this size assistant helps you pick a good choice for the size. There are even different scaling methods, so you can scale the size of the markers depending on diameter, or depending on area, which is most often used, and quite handy is this Flannery method, because people have found out that when using area, the human eye usually underestimates the size, so the Flannery method adds a little correction so that the human eye can better perceive the value.
Another nice feature for thematic cartography is the arrow symbolization. Here it is possible to symbolize line strings with arrows, and the really good thing is that it's all linked to the data-defined size, so it is quite handy to display information like imports or exports, or how many people travel by train or by car, and so on, because you can set the size data-defined, and you can even set the width at the beginning and the width at the end, so there are many options for thematic cartography. Another important feature is rule-based labeling. Before 2.12 we could have one label setting per layer. Quite often people want to display, within one layer, different labels for different categories; that is now possible with 2.12. I remember that, for instance, to display OpenStreetMap data in the WMS benchmark in 2010, we had to make a lot of workarounds because we didn't have that feature; we had to put hundreds of layers in the project because the labeling was very complex, so this isn't necessary anymore. Another nice symbolization feature is the 2.5D symbolization. It's quite handy to display, for instance, buildings. You can create a pseudo-3D effect, and in this picture we even see that all these symbolization options can be combined: what we see here is a 2.5D symbolization combined with a drop shadow from the layer effects. By combining all these possibilities there is a huge opportunity to make very beautiful maps. And there is another feature, the no-symbol renderer. What does the no-symbol renderer do? It does not render symbols. At first you wonder whether that is really a feature. Actually it is: in some cases you want to see only the labels or only the diagrams for a layer, or only select objects without seeing the objects themselves. Before the no-symbol renderer you needed complicated workarounds to make all symbols transparent and so on. Now this is easier with the no-symbol renderer. On the expression side there have also been a lot of improvements. I don't know if you are familiar with QGIS expressions: a QGIS expression is a dynamically calculated value. In the desktop GIS you usually see that pocket calculator symbol in the toolbar; that's for it. But it's not limited to that: these dynamically calculated expressions are used in a lot of places in QGIS. In the field calculator, where we would expect it, but we can also use them for labeling, for instance to concatenate strings into a new labeling expression, something that does not exist as a field; for data-defined symbolization; for tooltips; in the print composer; and probably in even more places. And you can use them from plugins, of course. Until now these expressions worked only on one feature, so you could use attribute values in your expression and then make fancy calculations with them. Now you can also use aggregate functions. You can, for instance, say that my label should be the mean or the median over the features, the minimum and the maximum values, a string concatenation, and there are even more possibilities. QGIS is not only a desktop GIS; it's a GIS library, a framework and so on, and it's also a server. On the server side there have also been some improvements. One thing is redlining, where it is possible, in the request to QGIS Server, to pass a geometry and a label that should be placed on the map. As a use case, you see it here in the web GIS: we used it for searching.
Before that feature, we could only highlight features that had been found by the web search. However, sometimes you want to search for features that are not displayed on the map and you want to label them. For instance, here is a place name: they were searching by place name, but the polygon of the place was not in that map. Now we can pass it in the request, and the good thing is that we can also pass it in the GetPrint request, so we can show it on the web map and on the print output. Another nice feature is the DXF output of GetMap in QGIS Server. It's quite easy that way to provide an option for people to download DXF. In municipalities a lot of people still work with DXF, so that's quite important, and the good thing is that the DXF output is also symbolized quite well, including labeling, symbolization and so on. A third new feature of QGIS Server is that it is now possible to write Python plugins for the server. There are a few hooks where Python plugins can chime in on the server, so it is possible, for instance, to write filters that modify incoming requests, and it is also possible to implement security and access control with a Python plugin. So, many nice features, and even more. The question comes up: after 2.16, what's next? That's easy: next is QGIS 3.0. And that leads to the next presentation by Hugo Mercier, "QGIS 3.0: plans, wishes and challenges". Thank you for your attention. Thank you, and very impressive work that is going on in the project. I have one specific question regarding the last slide, the last feature of QGIS Server. You talked about redlining, passing a geometry in the request; is it possible to combine that with the DXF export as well? Do you know? It's not done for the DXF export; it's only available in GetMap and GetPrint, and only in the GetMap raster output, not with DXF. However, I think it should not be a big deal to extend that to DXF as well. Other questions? So, thank you Marco.
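To make the server plugin hooks mentioned above a bit more concrete, here is a minimal, hypothetical sketch of a QGIS Server Python filter; the file paths and class names are made up, and the exact hook API should be checked against your QGIS Server version.

```python
# A minimal sketch, assuming QGIS Server 2.x with Python support enabled.
# Plugin name, class names and paths are hypothetical.
from qgis.server import QgsServerFilter


class DefaultProjectFilter(QgsServerFilter):
    """Example filter: adjust incoming requests before QGIS Server handles them."""

    def requestReady(self):
        handler = self.serverInterface().requestHandler()
        params = handler.parameterMap()
        # e.g. fall back to a default project if the client did not send MAP
        if 'MAP' not in params:
            handler.setParameter('MAP', '/srv/projects/default.qgs')  # hypothetical path

    def responseComplete(self):
        # post-process the response here if needed (logging, extra headers, ...)
        pass


class ServerPlugin:
    """Entry point instantiated by QGIS Server with the server interface."""

    def __init__(self, serverIface):
        # priority 100: lower numbers run earlier than higher ones
        serverIface.registerFilter(DefaultProjectFilter(serverIface), 100)


def serverClassFactory(serverIface):
    return ServerPlugin(serverIface)
```

The same mechanism is what access control plugins build on: intercept the request, inspect the user, and rewrite or reject it before the map is rendered.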
Since FOSS4G 2015, two new QGIS versions have been released; by the time of FOSS4G 2016 there will even have been three new releases. This presentation shows some highlights out of the huge number of new features. For instance, the labeling system received a number of enhancements which might not be obvious to users just looking at the GUI. Another major improvement is the new authentication system in QGIS 2.12. In the area of cartography there is the new 2.5D renderer, which allows the display of 3D-like visual effects. And the release of 2.16 at the end of June 2016 will surely bring some other highlights.
10.5446/20325 (DOI)
This is the final presentation in this session, also about QGIS, but this time it's about the new QGIS web client. The presentation is by Andreas Neumann and Pirmin Kalberer from Switzerland. We actually have a third person on the slide, Karl Magnus, he's in the middle, who is also part of our project team, but as it's not so useful to present with three people, we decided to do it with just the two of us. QGIS Web Client: a quick look back. It was introduced in 2009, and the goal was, as with the previous presentations, to be able to take the QGIS desktop project and publish it on the web. So the whole idea is not having to edit map files or SLDs or whatever complicated formats there are, or CSS, to configure web mapping applications: make it really easy for users to publish a map they already have in their local GIS infrastructure. QGIS Web Client 1 is a viewer only, and it's built upon OpenLayers 2, which is already phased out as you know, ExtJS 3, which is also phased out, and GeoExt 2, which also has a successor. The bad thing with these libraries is that they are not compatible with their next versions; basically it means you have to rewrite the whole system, because ExtJS 3 to ExtJS 5 cannot be ported easily, and the same goes for OpenLayers. Specifically, the client was built around the extended capabilities of QGIS Server. QGIS Server, as you may know, is a WxS server (WMS, WFS and so on), but it has a few extra extensions, mainly for things that are not in the standard. Things like printing: if you want to print from the printing templates you already have in QGIS, WMS doesn't have a printing standard, so this is one of the extensions we are using. Or some other extensions: the GetFeatureInfo command, for example, is not very standardized in WMS, which means that if you want some additional functions in GetFeatureInfo, you have to amend something yourself. It was started in 2009 in Uster, then others joined in; it has meanwhile been translated into 15 languages, and it's available on GitHub, at... oh, I forgot the address. You can find it via the QGIS website. This is how it looked, and one of the reasons we want to replace it: a colleague in Switzerland always tells me this looks so much like the 90s, like Windows 95 or something. So it's not up to the modern design that people are used to with mobile devices. You can see the feature info queries, the search function, printing, export, layer management and so on. This is a recap of what QGIS Web Client 1 did, and the goal, if we want to introduce Web Client 2, is that it should at least provide the same functionality Web Client 1 did; I won't read it all. These are the Web Client 1 users I'm aware of; I'm sure there are more out there, but these are the ones I also collaborate with. Many of them also helped to finance or organize the new project. Speaking about QGIS Server, this is what QGIS Server currently supports to serve from your QGIS desktop project. I put WPS in parentheses; I'm not so sure how far it is developed, it's probably work in progress or some alpha version, but there is an idea to use QGIS Processing and publish it as a WPS service. I know that the French company 3Liz is working on it. And, as I already mentioned, the easy configuration of projects in QGIS desktop: the symbology will always be the same between your web map from the server and your desktop.
QGIS Server has an extension called GetProjectSettings. What it does is try to transport additional information that is not available in the WMS standard, for example information on the attributes and on the field widgets that may be useful for a web mapping client to display, for example, feature info data. It has PDF printing, and it has filter and selection commands. Now the new project. The reasons for it I already mentioned: the libraries are phased out, and another drawback of the old project was that it had separate versions for desktop and mobile. There was another mobile web client made by Sourcepole that we used, built around OpenLayers 3 and jQuery Mobile. But jQuery Mobile does not perform very well, especially on tree structures; there are faster options out there. And we'd like to have one client for desktop and mobile, fully responsive as you change window sizes or have different device ratios. And we want it to be built on top of modern web frameworks; we mainly looked at Angular and React.js as the two main options that we considered. Of course, I mentioned the new, fresh look and feel that we want. And one goal is to make it quite modular. Maybe some of you went to the MapStore 2 presentation this morning? It showed how you can enable and disable components. That's something we want to have for QGIS Web Client 2 as well, so every user can configure it to his own needs, and actually, as Pirmin will say later, we want to reuse the same stuff from MapStore 2. Those are core requirements. And of course one of the main goals is again to make it suitable for the QGIS Server extensions; that's why we cannot just use any web client out of the box that doesn't support them. Further into the project, one of the goals is to make it very easy to deploy. A deploy script should take all the required modules and deploy them in a minified, compressed version, so you don't have to serve bloated versions or do it manually yourself. And the idea is, probably next year, to have a QGIS plugin or a web version, like we've seen today in the MapStore presentation, where you can select the components you want and configure them. Now the partners: the City of Uster, which is my former employer (I changed to the Kanton of Zug at the beginning of January), is one of the first to finance the project and will deploy it next year. Then the Kanton of Glarus, the City of Wolfsburg in Germany, and the Swedish community will be joining later, and Jena, and most likely my new employer as well. The developer company for the first version will be Sourcepole, with whom we have collaborated for some time already, and later on Invit in Sweden will also provide some modules. And once the first version is out there, it will be published in the QGIS repository, so other companies or developers can also join in and help. Now I want to hand over to Pirmin to explain why we chose React.js. I will tell you a few things about the technical background. As Andreas said, we have chosen the React.js framework from Facebook, which has some major things we were looking for. It is strictly component-based, which means you can develop components that are really not tightly coupled; they are very independent. It's not a full-featured MVC framework like Angular; it's basically centered on the view part of an application, and a map client is mostly view and not so much business logic in the background. And the concepts are easier than in other frameworks.
It is performance-optimized, with a nice concept for updating the DOM in the browser, and it has good tooling and development tools; the whole minification chain you get out of the box. So this is the major JavaScript framework we are using, and the other one is OpenLayers 3, which we build on. It is very powerful, as most of you know; it has all the features we need for professional GIS, and it's still fast and modular, so you can compile your own version with only the parts you need, and this is even built into the development chain, so you get an optimized OpenLayers plus the viewer framework around it. And it's usable on mobile and on the desktop. The non-technical reason is also that many of these partners have already invested in OpenLayers 3 development. We started with mockups, thanks to Peter Staub from the Kanton of Glarus, heavily inspired by the Swisstopo design. So this is how it should look. These are three different mockups: the desktop mockup covering the whole screen size, the tablet, and the mobile viewer. There are slight differences: you will notice up there the text is not there anymore on the left side, the logo gets smaller, some buttons disappear. But it's basically the same. The second page is about the tools. We have two menus which look very similar on all three platforms and can be popped out to get more space when you go into one of these tools. We have buttons for background maps, the usual things. That's the goal we have. And that's the plan in terms of time. So we started on a prototype and, as Andreas has mentioned, we don't start from scratch: we want to build on MapStore 2. We talked to the developers of MapStore 2 and we will collaborate with them. MapStore 2 is a new framework with exactly the technology we wanted to use, and the difference is that it is targeted at GeoServer as a backend. So one part of the work will be to adapt it to QGIS Server on the backend; on the frontend we can use most of it. We started a few weeks ago, and we will already have a code base ready for other developers in September (that's the earliest; it may be better to step in later, but it's possible in September). The first deployments are planned for early 2017, and development will go on next year. So I can show you the current version, which is basically MapStore 2. This is not the new version, that's the current QGIS Web Client, and what I want to show is the same map in the new viewer, which is a basic MapStore 2 viewer. That's our starting point that we build on. Another tool, not exactly the look we want. Now I have a difficult operation over my shoulder: I can show you the same map, the map from QGIS Cloud, which is a QGIS Server map. I select this map, and here I have the same map served from QGIS Server. Some tools are included: a layer tool, background maps, selection, and so on. These are the existing tools we start our work from. I have some more time, so: we have also identified functionality. We have zooming, we can collapse here, locate on the map, go back and forth, zoom to max extent, all the stuff which is needed, and here are some more. Maybe the biggest difference, or one of the first things, is printing, because that's completely different: with QGIS Server we can use the print layout from the QGIS project.
So we need less work on the server side and also on the client side, but we also need tools to rotate the map in the print layout, so there is a lot of work for printing; other things are really usable, and we only have to adapt the look and feel. So that's the current development. And the last slide is about how you can help. You saw the organizations sponsoring this first part of the work; that's the financial side. You can also help with testing; we will publish regularly on GitHub. And, as I said, everybody is welcome to develop, but one of our first steps will be to make it easier for QGIS developers to start with that framework, to really concentrate on that viewer so that you can start and write your own components or adapt it to your needs. All these things are coordinated by Andreas Neumann, so contact him if you want to do one of these three things. So I think it's time for questions. Thank you. Thank you for the presentation, and questions? Anyone? Karl Magnus, I know you were supposed to be a presenter here too. How are you going to use this web client? Okay. We're not using it yet, but I've been working with Andreas on requirements documents since last year, and now we have a contract with a developer, so we are getting in with development resources into the project this autumn, and we hope it will work out well for our local municipality in Sweden. I'd like to make it clear that most of the work on the client we've shown was really done by GeoSolutions, not to give the false impression that we built that client. We're just starting to build on it, and then we'll move on to develop additional components and integrate it well with QGIS Server. Another question? In the past, when I tried to run QGIS Server, it was always quite a complex task to get it up and running. It worked in the end, but it always took a bit of time. So I wonder if you have any plans to ship Docker containers, pre-configured ones, or similar possibilities to get something that is pre-configured to run quickly? I don't think it's part of this project. Of course, someone is welcome to do that, but it's not our main task. We as organizations already have it set up. But I agree it could be easier to deploy QGIS Server, not talking about the whole framework. It's not a goal of this project; if someone else wants to do it, you're welcome to reuse it. One of the last questions: you're using QGIS Server because of enhancements to the standard OGC services. Those services are community-driven and able to be improved; is there any feedback loop you have for improving the specifications for things like WFS, by identifying the gaps you see in them, how you're addressing those, and improving the standards to address those gaps? For the first part, it's not just the standard extensions that we are using; we also like the fact that it's easy to configure. Styling a map in QGIS is very easy, and people often do it anyway because they need it in the office, so that's one of the main reasons, I think. But feeding the improvements back to the standardization organization: I actively did not do it, because I haven't seen any activity around WMS in recent years. Is there any? It's a good idea, yeah. Do you want to comment on it? I mean, other map servers have a REST interface with additional functionality, and that isn't standardized either.
I mean, all these projects have additional functionality and they want to expose it to the user, so they extend the interface. But you also lose something with a standard, because then you have a common denominator for everyone. So maybe it happens, maybe not. Will you, in the future, collaborate with the Italian company, GeoSolutions, to further develop the core code base for the two projects? That's definitely planned. Yes, our work is also fed back into MapStore 2 and usable with GeoServer as well, so you could maybe have the same interface with GeoServer. That should be possible. Thank you for your presentation, and now it's time for the coffee break.
QGIS Web Client (QWC) is a Web-GIS client based on OpenLayers 2 and ExtJS and tailored to use special extensions of QGIS Server, such as extracting information from QGIS Project settings, extended GetFeatureInfo Requests, GetPrint and DXF export. It uses standard WMS/WFS commands, but extends them where needed. QWC is used by several cities and provinces in Europe. There are four main reasons why QWC needs to be overhauled: The code structure is not very modular and should be better structured. QWC only works well on Desktops. Despite a separate mobile web client based on OpenLayers 3 and jQuery Mobile, for maintenance reasons it would be much better to have a single web client that uses responsive design and works for all devices from a single viewer. The base libraries ExtJS 3.4 and Openlayers 2 have been phased out and there are newer versions available. However, the upgrade to the newer versions is not trivial. Having a more modern foundation based on newer web technologies This presentation discusses the requirements, the progress of this project, technical decisions taken and challenges solved during the project. While the first goal of the project is to establish a modern foundation for the coming years and reach feature parity with the old QWC project, it is already planned to implement a QWCII python plugin that offers a GUI and assists with the global configuration of the client. This tool should also facilitate the management of topics and projects.
10.5446/20324 (DOI)
We will hear the next presentation from Michael Terner and Calvin Metcalf, and I'm excited. I'm going to talk quickly about the economics of bringing a new geo product to market. There are a lot of things that have happened in the technology industry in general, and in the open source community, that have made bringing a product to market easier than ever. I'm going to quickly go through the cloud and open source as engines of innovation, and then walk you through the product opportunity we saw and how we brought something to market. I'm going to do something a little risky and switch to Calvin, who will walk through the specific open source technology about 10 minutes into this presentation. The notion is, and maybe this isn't so hypothetical, that you have a business idea, it involves spatial, you think there's money-making potential, and, just as in the previous presentation, you might want to make it really good, but you want to contain costs. Where do you begin, and what choices do you have? How do you get this done? The reality is that things like the cloud are drivers of innovation: small companies can bring something to market more easily than ever, and if you're successful you can scale it, which is really important. Don't be afraid of success. The cloud is a fundamental difference maker, not just for business but for the geospatial industry, for life: infrastructure as a service, platform as a service and software as a service are complete game changers. How are the cloud and open source connected? My assessment is that many clouds are powered by open source. Google's cloud is millions of Linux-powered servers, and Google and other big companies contribute to Linux development and write all the massively parallel stuff. So when you have infrastructure as a service, where you can rent virtual machines for pennies per hour, the reality is that hardware costs are lower than ever. You don't need a data center, you don't need air conditioning; you put your credit card into Amazon or Google and you have a data center. At the same time, software is lower cost than ever. There are open source things, which we're hearing all about today, that you can just download and roll your own, or there are new companies, like Mapbox and Carto, that are powered by open source; you can pay for them, but their cost is much lower than it would be otherwise, because they too are powered by open source. So open source is even making commercial software lower cost. So there's this little flow chart: you have an idea. Your first choice is whose cloud you want to run it in, Amazon or Google. Do you want to use free software directly or an affordable platform: free software that you host yourself, or an affordable platform like Carto? You make your choice, you execute your idea, you have a great programming team, and you have a new product. That's essentially what we did, and it's also essentially what a lot of other companies have done. This is the model of Carto and Mapbox, Planet Labs, Fulcrum: they're running their companies out of a cloud and leveraging open source software to power their products. So what was our idea? We wanted to build a high-performance, cloud-based tile pyramid server, essentially realizing the notion of imagery as a service. So where did this idea come from?
Amongst being open source users, we're also Google partners, and in North America there is tremendous Google imagery, both Street View and the satellite view, which is actually flown by airplanes. And in North America Google is now selling their imagery: they have a new product called Candid imagery, and you can actually buy the images themselves. It's a really tremendous product. It's six-inch pixels, 15-centimeter pixels. They fly every three years, so the imagery is no older than three years, and unlike Google Maps, which everyone can continue to use for free, you have the right to bring the imagery into third-party software. You can bring the imagery into QGIS, you can bring it into Esri, you can bring it into AutoCAD or Bentley. You have the right to download the imagery, you can do image analysis on it if you want, and, very importantly, you have the right to create derivative works: you can digitize on top of Google's imagery to create your own features. All of these things are prohibited with the free use of Google imagery through Google Maps or the free version of the Google Maps API. So the trick is how to deploy the imagery once someone buys it. Originally Google used their product Google Maps Engine to do this; however, in December 2014 they announced the deprecation of Google Maps Engine, and they looked to the partners who worked with them on imagery and said: you create the serving technology, and hopefully you'll put it in our cloud. That began our journey to develop what we now call the Giza tile pyramid server. So how does it work, notionally? Google has this big cloud; they have a competing product to Amazon Web Services that they call Google Cloud Platform, and it has the same things: virtual machines, bucket storage. When you buy Google imagery, they create one bucket with tile pyramids in it, and they also put in the original JPEG 2000 images. So you basically have, depending on the size of your purchase, hundreds of gigabytes to dozens of terabytes worth of files in the cloud. How do you get cloud-based files into software like Esri or QGIS or AutoCAD? That's where OGC services come in. Open standards have been around for a while, and even the commercial software packages like Esri and Bentley have started supporting them and provide pretty good support for consuming OGC services. So by choosing the OGC standards, we've provided the gateway to get imagery out of the commercial cloud, where storage is very low cost, and into software. These are a couple of Esri screenshots of how you find a WMTS service, and this is Google imagery inside of ArcMap. So Calvin wrote this amazing program that does it. We've now brought it to market and delivered it to several of our customers. So what is it? It's a custom Node.js application whose code we manage. It's installed on virtual machines as a small cluster, two to five machines depending on how many users the imagery data set has. And it really does one thing well: high performance, a high degree of scalability as usage grows, and really good, compliant WMS and WMTS that plays nicely with all the different software environments that have to consume this service. So in the end it looks like this, and again, part of the modern cloud is that these clouds have lots of different parts.
So we're using Google Cloud Storage for buckets, Google Compute Engine for virtual machines, and we also log (I'll talk a little bit about this in a moment) every request to the server into their big-data-as-a-service, BigQuery. So we can do statistics: we know who is using the imagery, what parts of the world they are looking at, and we can report back to them on their usage. So why did we choose an open source approach for bringing this product to market? We obviously wanted to contain cost, we wanted to differentiate from Esri, we wanted to leverage existing tools that solve part of the problem, and we also believe in giving back and thought that what we added would improve the open source community around the tools that we use. So I'm going to flip over to Calvin right now, who will talk about how this thing is engineered. Hi. For the record, mine is actually not a PechaKucha, because I got the numbers wrong: it's 15 slides, 15 seconds each, so it's a hyper PechaKucha. So, it's Node.js. If you're not familiar with Node.js: Node.js is the heart of Google Chrome ripped out and put onto a server, so you can run I/O-based applications very fast. It has a very good server framework called Express that has a great middleware system, so there are a lot of tools out of the box, and it's very good for workloads where you're waiting for disk rather than doing heavy computation. We have some client-side app components that are all built using Browserify, which is a fantastic way of having small bits of code that just get smushed together, and a similar thing called Less, which is for CSS. The actual geospatial part is some middleware that I wrote, and we open sourced it, for doing the WMTS stuff. I winced every time Michael said "compliant", because it's impossible to write a compliant one of those that actually works for everybody, because everybody does it their own horrible way. We use some Mapbox stuff to actually stitch the tiles together: there's abaculus, where you just give it a bounding box and a way to get tiles, and it gives you a big old image. And then we use Mapnik to convert between types of images and to stretch them. We used to use GraphicsMagick, but that was so slow: GraphicsMagick changing the image size took longer than stitching everything together. And then we put everything into BigQuery for the stats. It's big data as a service: basically you cannot edit, you cannot delete, you can only add rows, you can never change them, but it's the same speed whether you have megabytes, gigabytes or terabytes. There was no real Google library for Node for using BigQuery when we started, so I had to write my own. It is also open source, and it is for Node.js. At the end of the day, the WMS parts were the easy parts; most of the hard problems were not actually geospatial problems. They were problems with managing users, or "I want to look at this problem with the stats", that kind of stuff. So originally we had a Handlebars server-side templating app that just sends fairly static data as HTML and uses Bootstrap for styling. We ended up switching out the server-side app and making it more of a client-side app using React and Redux, which are the hot frameworks these days that the young'uns use, but they were very easy to use. We deploy using GKE, which is the acronym for Google Container Engine.
It's based on Kubernetes technology, which makes deploying to various places really easy. We use Postgres with PostGIS, and Redis, for the back end. Redis is technically a database, but it's not really a database; it is more for caching and inter-process communication. And then we had another geo issue, which is that somebody said: we'd like some heat maps out of BigQuery, can we just get some heat maps? That's easy, right? It was not easy. We ended up using a Z-curve-based approach where we basically turned all the tiles into geohashes, so you can do prefix queries and get all of the things underneath a tile. It's not open source because it's really gnarly; I guess we could, but it's more like embarrassing, you wouldn't want to look at it. Anyway, back to Michael. Thanks, Calvin. I'll show a couple of pictures of the things that Calvin was referring to. When we were done, one of the things we noticed, which was really cool, is: gee, we built this in Google's cloud and we built it for Google imagery, but there's nothing preventing us from serving any kind of imagery. And we bumped into a friend of a client of ours in New Zealand who had flown some drone imagery and wanted an affordable way of cloud-based serving for her own imagery. She was an Amazon customer, and with a little work from Calvin on Amazon bucket security access, we used the exact same product to serve drone-based imagery out of Amazon's buckets. And by the way, her tiles weren't made by Google; they were made by Esri. So we really have a flexible architecture that can serve any kind of imagery out of either of the two major commercial clouds. So what are the benefits? One of them is the statistics. Again, this is very important: imagery is expensive, and lots of different agencies use it. Being able to tell how much the subcontractors to the state use, or the Department of Transportation, or cities or counties, gives transparency to the owner about who is taking advantage of the imagery, and that helps them get funding the next time around; it helps them explain the value they are adding. And then again, Calvin's heat maps. The tricky part of the heat maps is that the heat changes as you zoom in. This is looking at the entire state of Utah, and Salt Lake City is probably the busiest, but there are some surprises out in the hinterlands. As you zoom in, this one area has been accessed over four million times in the last year, and as you zoom in tighter, a lot of the weird hot spots are road construction projects, because the DOT is the second biggest user and a big consumer of imagery. One of the other benefits is that people are over the hump of having to download the imagery, which is expensive and then has to be stored. Just as people have gotten comfortable streaming music (in the old days you had to have all the MP3s on your iPod or your device; now people are very comfortable just streaming music when they need it), what the stats show us is that over the last year usage has grown and grown and grown while downloads have remained completely flat. So if you give them a good service to stream, they don't download, and that's very powerful and saves a lot of time and storage cost from people moving these big files around. So the other big benefit is cost. This is Utah's data from June of 2016. They're running four servers now, so it's $400 per month; they are medium-sized Linux boxes. There are 46 terabytes.
There are 200,000-plus square kilometers. The 46 terabytes cost them about $1,500 a month. And then you are also charged egress costs for pulling the tiles out of the cloud, a bandwidth charge. They pulled 700 gigabytes out during June, and that costs about $100. So the cost per month is about $2,000: they're serving 46 terabytes of imagery to hundreds of daily users for about $24,000 a year. And it turned out to be about a third of the cost of what they were doing previously, which was using ArcGIS Online. So there are big cost benefits to having this kind of open approach and leveraging the low cost of the commercial cloud. So in conclusion, a small company like ours, 35 people, can bring a product to market. Keys to success: being able to leverage the open source technology (all those frameworks, Bootstrap, Mapnik, etc. were out there, and we leveraged them to our advantage) and the availability of low-cost virtual hardware. Going from two machines to five machines, or if we have someone who's big enough and needs 15 machines, we don't have to put that in our data center: just turn on more machines and pay by the hour. And as Calvin said, we believe no one likes just takers in the open source community; if we're adding value, we should put it back out to the community, and Calvin is really good about doing that and has the full support of our whole company for it. And I would be remiss not to mention that our hometown of Boston is hosting FOSS4G Boston 2017. Please consider coming. We will say a little more at the closing session on Friday and show a video, but we'll try hard to be as good as Bob and his friends have been so far. Both Calvin and I would be happy to answer any questions or demo the appliance; we can show you on our phones or on our laptops in the conference hall if you're interested. And for those of you who are playing bingo, feel free to just yell it out; if you need any words like geospatial or spheroid or Esri, I'm trying to help out. Thank you very much. Thank you very much, Michael and Calvin. And thank you for showing a picture of Sebastian Faulma. Very polite. Yeah, are there any questions? In one of your slides you mentioned that you have the concept, the idea is born, then you use a cloud and open source software and then bring that product to the market, and you showed it as a very smooth path, but I'm sure you had difficult times. Is there anything that you want to share as one of the important lessons learned from your journey? Yeah, one is the team. I intentionally showed our American football team from Boston, which is very successful and has had a lot of continuity with our coach. You really have to have a team that can execute on that idea, and talent on the team, in development. And Calvin has been a really strong advocate for keeping this simple, the bullet we had saying this thing is designed to do one thing and one thing well. We've had lots of people go: oh, can you do tiles that are not geographic or Web Mercator, can you do projected tiles and things like that? We're traditionally a consultancy, where you say yes; if you want to give us money, we'll do whatever you would like. But with a product it's different, because it's not just the one customer; it's all the customers and the next customers.
So you have to be, you know, a bit of an asshole that just says: no, you may not have nice things. We give you this, you enjoy it, and if you'd like something else, we can get you something else, but this is this thing. Yeah, like focus. We do a lot of work helping our customers design websites and design software, and so we went out with a tight plan of what we wanted to bring to market and the talent to get it done. Thank you. Thanks. So, just following on from the presentation yesterday about the Copernicus data: do you have any plans to bring that into the platform as an open alternative to Google imagery? No. But if someone wanted to take some Copernicus data and create a backdrop they were interested in... I mean, we're never going to do image analysis or whatever, it's really for image display. But if they wanted to create backdrops from Copernicus data as tile pyramids, it would be very easy to do that. Yeah, we would love to take somebody's money to make tile pyramids of Copernicus data that they could then put in the application. But that's not really what this is: it's a platform for serving, not a storehouse for imagery that's freely available. It's for people who have their own imagery and want to get it out to others. Okay, I think one more question. Hi, great work. Is your WMS middleware Node.js, and is it open source? Yes. Yes, npmjs.org/wms. Okay, thank you very much.
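The Z-curve heat-map trick described in this talk can be illustrated with a small sketch. This is not the project's actual code (which, as Calvin says, isn't published); it only shows the idea under the assumption that each tile request is recorded as (x, y, zoom): encode the tile as a quadkey, and a prefix of length z identifies the parent tile at zoom z, so heat-map counts at any zoom reduce to prefix aggregation.

```python
# Rough illustration of the Z-curve / quadkey idea for zoom-dependent heat maps.

def quadkey(x: int, y: int, z: int) -> str:
    """Interleave the bits of x and y into a base-4 string of length z (Bing-style quadkey)."""
    key = []
    for i in range(z, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key.append(str(digit))
    return "".join(key)


def heatmap_counts(requests, zoom):
    """Aggregate (x, y, z) tile requests into counts per tile at a coarser zoom level."""
    counts = {}
    for x, y, z in requests:
        if z < zoom:
            continue  # a coarse request cannot be split into finer tiles
        prefix = quadkey(x, y, z)[:zoom]  # ancestor tile at the target zoom
        counts[prefix] = counts.get(prefix, 0) + 1
    return counts


# Example: three zoom-14 requests aggregated at zoom 10
requests = [(8936, 5893, 14), (8936, 5894, 14), (9000, 5900, 14)]
print(heatmap_counts(requests, 10))
```

In an append-only store like BigQuery, the same idea becomes a GROUP BY on a fixed-length prefix of the stored key, which is why the counts stay cheap even as the log grows.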
The cloud and open source software have fueled a wave of innovation that has enabled both large and small companies to bring products to market more easily and with less cost and friction than ever before. This talk will describe our journey to bringing such a new product to market. In 2014 Google began selling its high resolution imagery and purchasers received the data as large buckets of files deployed within Google’s Cloud Platform (GCP). This opened a requirement for high performance serving of that imagery via the Open Geospatial Consortium’s (OGC) WMS and WMTS standards. This talk will describe the process of a small company developing this image serving technology by both incorporating and contributing to open source and geo open source initiatives. The talk will describe the market opportunity for the new product as well as the business case that led us to choosing an open source approach even for something that is ultimately sold. The talk will also describe the Node.js technical approach that was chosen and the array of geo tools, such as Mapnik and PostGIS, and other open javascript frameworks (e.g. Bootstrap, Handlebars.js, etc.) that underpin the solution. The talk will also highlight our development team’s open source contributions back to projects and the community. The talk will conclude with a description of the lightweight server and its features that enable an “imagery as a service” business model that daily serves hundreds of users in Utah and Texas.
10.5446/20322 (DOI)
Okay, the next talk will be about command line geography and it's given by Eric. Please. Thanks. Hello, everyone. This talk is called command line geography, so it's a bit of an opposite direction from the previous talk. I'm Eric, hi, hello; you can find me on GitHub and Twitter under this name. I'm currently living in Madrid and working as a front-end engineer at Vizzuality. Vizzuality is a consulting agency working mostly for NGOs. We build mostly web-based products, with maps most of the time, and data visualization. This is our biggest project to date, called Global Forest Watch; it shows you the evolution of the tree cover of the Earth. Another one is NGOL map, which is an open data portal. So we have a web-based approach to GIS. These are the main tools we use: Leaflet, Google Maps, D3, Carto.js, PostGIS. This is part of our toolchain; React and Redux are the fashionable tools as of now. And on the back end, mostly Ruby and Rails, also a bit of Python, and Postgres obviously. This one is nice, it's World of Impact. It's nice because it basically uses the whole stack, basically all the tools that I just mentioned: it has a Leaflet map with D3 overlays on top, a Torque layer from Carto, and a PostGIS database. This is another one, using Torque from Carto, showing tourism in Spain. And I need to mention that we are recruiting, engineers mostly, but also data scientists, so let's talk if you want to talk. My original thing is front-end engineering; I only came recently to geography, or GIS, but photos show that it was also a childhood interest. At some point I got interested in that and bought a lot of books about GIS and maps and things like that. I haven't read all of them, probably not a lot. So I'm a front-end engineer talking to hardcore GIS people, and I feel a bit like an impostor, so sorry if I say anything really wrong. But we're building bridges here, so it's nice to have mixed experiences. And I'm going to be talking about developer-friendly tools to do, not really GIS actually, but more web mapping. So yeah, a bit like before, but in the opposite direction: I'm trying to use fewer and fewer GUIs in my workflow. I also have a strong interest in tooling, and sometimes the tooling gets me more interested than the actual product, but I think that's a common engineer trait. So here are the topics I'm going to talk about. I'm going to talk about the tools of the trade, meaning CartoDB, Atom, and the command line. We're going to see how we can use a starter kit to quickly iterate and build web-based projects for geography. I'm going to talk about how to interact with CartoDB on the command line, and how to use Atom for web mapping. Geocoding is my favorite topic, so I'm going to talk about that, and we'll finish with some more maps. So here are the tools we're going to use: CartoDB, or Carto. I guess most people know Carto as this, a web-based interface to do GIS. They very recently redid their editor; it's called the Builder now, the online editor. If you don't know what Carto is, you can go to their website and try to understand what it's about. It says that you can improve customer satisfaction by 15%. That may or may not appeal to you as an engineer or GIS professional, but CartoDB is absolutely brilliant because it basically does everything. Technically, CartoDB is a platform-as-a-service PostGIS instance.
But it goes beyond that, because it has geocoding, routing, isochrones, base maps, and it also has readily available open datasets. It has a set of REST APIs which allow you to interact with this PostGIS instance, and on top of this is built the editor, which is called the Builder. We use Carto a lot at Vizzuality, for many reasons: it gives easy access to front-end developers and to people who are not into GIS originally, and allows them to do analysis and pretty advanced stuff. Also, we use CartoDB a lot as a very simple back office, because it allows you to synchronize, for example, spreadsheets from Google Drive, that kind of stuff, which makes things really easy with clients, for example. And the Builder is very powerful, but also very useful for prototyping before actually coding. I used to work at CartoDB; I thought it would be worth mentioning. Also, a fun fact is that Vizzuality is originally where it all started: CartoDB was originally made to satisfy the needs of Vizzuality. Obviously, it has outgrown it a lot; Vizzuality is a small company, Carto is slightly bigger. Then I'm going to talk about Atom. Atom is a text editor, basically in between a text editor and an IDE, depending on how you see it. It's sleek, looks good, it's very useful. But what interests me is that it is actually running a Chrome instance, and you can basically hack it exactly the same way you do websites. It's a very interesting proposal for us front-end developers. You can just edit the stylesheets and, say, put everything in Comic Sans. I got excited by this, I don't know why. And the last thing is the terminal. In French we have a saying that it's in the old pots that you make the best soup. I mean, Bash is very old, obviously, but as I was saying, I'm getting more and more into using it instead of GUI equivalents, because it just makes you more efficient. In more detail: for the terminal I'm using iTerm on Mac, zsh as a shell, and then everything that you can run inside it. And Input Mono, which is the font I'm using; it's a pretty gorgeous font that I recommend. So let's start with the command line. The first thing you need is a sort of bootstrap thing to actually start a project, right? You need some kind of index.html, some bit of JavaScript; I'm still talking about web mapping, obviously. One thing that makes us really good at Vizzuality is that we actually have a dedicated exploration phase every time we start a project. This is something that goes into budgets, actually, and it allows us to look at the data set the client is providing, sometimes an API, and also to experiment with interactive stuff, because web mapping and data visualization is a hard topic in terms of UX and UI, and we need to iterate quickly. We always involve the client heavily in it, which means that we need to start things very quickly and bootstrap projects efficiently. There is a tool called Yeoman which is exactly intended for that. Basically, Yeoman is a command line tool that allows you to generate files based on some settings, with an interactive prompt. And Yeoman and CartoDB: Yeoman uses generators, a set of generators to generate, say, a bootstrapped project with React or whatever, and there is one for CartoDB. You just install Yeoman and then the CartoDB generator, then run it, and it will ask you a few questions.
This is maybe the only good reason to use npm install globally. Usually you shouldn't, but here you don't have a project yet, so you don't have node modules, et cetera. Don't sudo this. If you sudo this, you're doing something wrong. So how does it work? Just type yo cartodb and it will ask you a few questions about your project: mainly what your project name is, whether you want to create a Git repository for it automatically, whether you want to transpile ES6 and CSS through PostCSS, your CartoDB username if you want to interact with the CartoDB APIs, and some libraries you might want to use. And then you have to select between a few templates. It will generate files for you. Then you type npm start inside the project you just created, and it will do a few things. It will run a local server for you with some default standard maps already set up. It will have live reload, which is absolutely essential for productivity. Basically live reload means that every time you change something, a file, the files will be watched and it will send something to the browser so that it reloads everything. ESLint is set up initially in the Yeoman generator. So linting in JavaScript is basically testing for the poor man. We all know that testing is necessary, but we all know that sometimes you cannot do it. So linting helps you catch the biggest, the most evident errors. It's essential. So in a nutshell, basically the idea for this generator is to have a nice balance between complexity and something too simple. You don't want to start every project with technical debt in the worst case, or in the best case with just a lot of code. So there is a balance to find here. And I think using npm as a task runner instead of having a dedicated task runner makes things much simpler, because in your package.json file you can just define commands and then run them using npm run, and it will just use the shell to run stuff. So now that you have your project set up, how do you use CartoDB on the command line? First of all, these are all the services that are presented on the CartoDB website. Some of them are front-end libraries. Another one is CartoCSS, which is just a language. They have something really cool called the Data Observatory, which makes open data sets available. But basically we're interested in those four usual suspects: the SQL API; the Maps API, which allows you to generate URL templates that you can use in tile layers, in Leaflet for example; the Import API, which allows you to import files to your PostGIS instance online; and the Data Services API, which covers geocoding, routing, et cetera. So how do you interact with those services? Right now you have three possibilities. You can either use the Python client, which is sort of the official way to do it; it's maintained inside CartoDB — CARTO, sorry. So it's the more evident choice. The Ruby library, unfortunately, is deprecated. And what I'm going to use is the Node.js client, because it has direct command line access baked into it. And just to show you how it works: just type cartodb, pass your username, and just feed it some SQL, and stuff will happen. So for example, here, we're just selecting 10 countries of the world. And by default, it will give you some JSON output, including geometries, if I had selected geometries here. So this JSON format is the default CARTO API format. But probably at some point, you're going to need more common formats. So CSV is one.
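For reference, the call just shown boils down to a single HTTP request against the SQL API. Here is a minimal sketch of the same kind of query in Python with the requests library; the account name and table are placeholders rather than the ones from the demo, and the optional format parameter is what switches the output to GeoJSON or CSV:

    # Minimal sketch of a CartoDB/CARTO SQL API call; account and table are placeholders.
    import requests

    CARTO_USER = "your-account"   # hypothetical account name
    SQL = "SELECT name, pop_est FROM world_borders ORDER BY pop_est DESC LIMIT 10"

    resp = requests.get(
        f"https://{CARTO_USER}.carto.com/api/v2/sql",
        params={"q": SQL},            # add "format": "GeoJSON" or "CSV" for other outputs
        timeout=30,
    )
    resp.raise_for_status()
    for row in resp.json()["rows"]:   # default JSON response carries a "rows" list
        print(row["name"], row["pop_est"])

The Node client used in the demo is essentially a convenience wrapper around this same API, with command line ergonomics on top.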
With the Node client, just by using the -f option you're going to be able to get CSV right there in the terminal. Or if you want to actually work with it, you can either send that to a file or you can pipe it to csvlook, which is one of the tools of csvkit, a set of tools to handle CSV written in Python. And this one, for example, will nicely format a CSV table. There are many, many other tools; csvstat is very interesting as well, it will give you stats about all the columns of your CSV file. So basically, just call cartodb, send some SQL, and pipe it through to the CSV tools. GeoJSON is also available on the command line. So type -f GeoJSON; obviously you're not going to use that inside the terminal. But the guys at Mapbox did something very interesting called geojson.io. It's a very simple sandbox to build GeoJSON online and see it on a map live. And they also have a command line tool. So you just pipe it through the geojson.io command line tool and it will open the website with, here, the countries that start with Z. There are only two, apparently. Authenticated queries: at some point, if you want to do write operations, you're going to have to provide your API key, otherwise the CARTO API is just going to refuse to do anything. So you can provide it this way on the command line, or you can have a JSON file with your API key and your username and just provide it to the command line. And then, last but not least, you can use the CartoDB Import API. So you have a shapefile, for example, here, of the New York subway. Just type this and it will import it to CartoDB and open the table that has been created. Yeah. Also, instead of using a file, you can directly grab a URL from somewhere on the web. So your customer satisfaction is definitely going through the roof here. Now, how do you use Atom for GIS, or for web mapping more precisely? First of all, Atom has an integrated package manager. It's very useful because you can basically run this and it will install very interesting plugins. I'm out of time, so I'm going to skip through this, but it's just a list of very interesting packages for Atom. Pigments is really, really cool. So I've been talking about the CartoDB APIs. What if we had something in Atom to just interact directly with those APIs? Yay! Just make a right click on some SQL. This is a simple, usual Leaflet setup, and it will give you the result of this right inside Atom. It also works with GeoJSON, so for example here, just generate GeoJSON and feed it directly to Leaflet. And then you have this. Easy, with a simple SQL query. And SVG is also supported by the CARTO API, so just select Belgium and get Belgium. File import is also implemented in this package, so you're going to be able to right click on your shapefile and just upload it to CartoDB. So customer satisfaction is really going crazy now. And it's just working this way. I've just been doing this recently, so please do pull requests and insult me, because it doesn't have any tests. And the last topic I'm going to talk about is geocoding. I love this. CartoDB can geocode a lot of things, and it can give you either polygons or points. It can geocode countries, IPs, zip codes, you name it. The magic behind this is mostly the map data providers and then OpenStreetMap, obviously. So kudos to them. What's really cool with the CartoDB SQL API is that you can just use SQL functions to geocode stuff. It's really cool. It means that you can do what we did before with SQL queries for geocoding as well.
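To make that idea concrete, here is a hedged sketch of such a geocoding query issued through the SQL API from Python; the account, the API key and the expected output are placeholders, and cdb_geocode_namedplace_point is one of the CARTO Data Services SQL functions (it needs geocoding quota on the account):

    # Hedged sketch: geocoding by calling a Data Services SQL function through the SQL API.
    import requests

    CARTO_USER = "your-account"   # hypothetical
    API_KEY = "your-api-key"      # hypothetical

    sql = "SELECT ST_AsGeoJSON(cdb_geocode_namedplace_point('Beijing')) AS geom"
    resp = requests.get(
        f"https://{CARTO_USER}.carto.com/api/v2/sql",
        params={"q": sql, "api_key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. a GeoJSON point string such as {"type":"Point","coordinates":[...]}
    print(resp.json()["rows"][0]["geom"])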
So for example here we're just getting Beijing and piping this to geojson.io, and then you have Beijing. Street level works as well, same principle. I also made a little tool called CartoDB SQL which will turn a simple string of locations into SQL queries, because you have one SQL function per type of entity — street level, countries, et cetera. So here it will kind of try to detect what it is and just put the right SQL function. So if you type an IP, it will guess that it's an IP with some simple regex. So you can use this in a subshell — using the backticks, of course — to feed this to the CartoDB Node client. And yeah, so you can do this. And it's really cool because you can, at the same time, geocode Paris, Paris in Denmark, the zip code of Paris, Texas, and the IP of the Kiribati Islands, which happen to have a place called Paris. It's an abandoned settlement, apparently. So you just give it a bunch of stuff and it will give you coordinates. That's just magical. And it's also implemented in the Atom package. So just right click on Plaza Mayor Madrid and you get your Leaflet map centered on Plaza Mayor, which is such a beautiful place. You should go to Madrid. It's awesome. I'm going to finish with this. I'm sorry I'm a bit late. But there is a tool called GeoJSON to ASCII that will do exactly this, transform GeoJSON into ASCII maps. It's so cool. It's not mine, and I just love it. So you can do ASCII maps, but you can also do emoji maps. And just by piping some cartodb commands you can turn SQL queries into this. So it's a map of the world, and these smiley faces show how close the countries' GDP growth is to 2%. I'll try to give the URL of this presentation later so you can get the code and the libraries and all that stuff. And sure enough, we are hiring at Vizzuality, so if you're an engineer or a data scientist, or if you're looking for an internship, please talk to us. And that was it. Sorry for being late. Thanks. So, let's get down to it now. Any questions? Anybody? Come on. It's okay. Yeah, take some time. To ask a question out of curiosity, how big is the exploration phase when you have a project, in terms of percentage of the time you dedicate within a project? It's hard to reply, but I guess it heavily depends on the project. The quality of the data sets we receive, for example, is very variable. Sometimes we don't have any data set. Sometimes it's just very big raster files. Sometimes it's a neatly prepared API. So for the data exploration part, that's very variable. And then we have projects that look a lot like other projects. I mean, web mapping projects tend to look like other ones. And some projects are just much more difficult, much more complex. It depends heavily, but I can't give you a number. Okay, thanks. I have one here. Please do. So when you're doing things on the command line and you're piping a lot, you can have up to 15 pipes or something like that. How do you deal with that in terms of scalability? When you're analyzing the code, you know, a couple of weeks later, you don't know what's going on, actually. So there are many ways to make this better. Here I've just been demoing stuff in the terminal, but obviously you can use shell files and just run shell files. It wouldn't be my first choice, because I really like JavaScript and I would preferably do Node scripts instead. But I agree that it's not scalable as such. The main reason for doing stuff in code instead of using GUIs is the repeatability of stuff.
If you do stuff in an interface, it's going to be very hard to repeat the process and, I don't know, integrate an updated version of a dataset, for example. With this stuff, you just put that in a shell script and run it again. So scalability is actually better using the command line than GUIs in general. That's a very vague statement. I'm assuming one of the other advantages is that generally GUIs take up a lot of memory and resources. Have you seen any major benefits in terms of performance with regards to... Well, usually it's not the... Sorry, that was the end of the question. Usually it's not critical at this point, because when you're running here, obviously you're not running it in the final product, it's just stuff you do beforehand. And so that question will arise when you use large datasets, for example. But honestly, it has never truly been a problem, especially since you're using CartoDB. So basically it's CartoDB's problem. That's also one good thing about CartoDB: it scales. That's what you pay for, if you pay. So usually it's not a problem, not at this stage. I mean, the memory issues, the performance issues, we have them on the front end all the time. But here, not really. I also want to mention — I forgot to mention that — CartoDB is open source, which is also a very good reason to choose it. Hi. What is your experience with Atom? Do you use it for general coding? And for example, if you are using it for GIS, what about some bigger files? Did you have some? Yeah. Yeah, good question. Atom is extremely bad with big files. And I don't know, I haven't looked into it, I don't know why, but that's the reason I still have a paid copy of Sublime Text on the machine, because it handles big files much better. Obviously, we're not talking about binary files here, but a big 2 GB CSV — just don't open that in Atom, because it will just send your computer to space. But Sublime handles that pretty well, and yeah, that kind of sucks. I don't have an answer there. I should look it up. I don't know if they are trying to fix this, but basically, obviously, the fact that it's just some code running in a Chrome instance is limiting. I wish I could give you more technical answers. So, except for big files, is it a good experience? Yeah, Atom is really good. Yeah, yeah, yeah. It's very extensible. Basically, everything in Atom is a plug-in. Everything. There is no core functionality, or it's very, very small. So it makes it very agile, and the ecosystem is really big. One good thing about doing this kind of talk is that it actually pushes you to go look into the ecosystem of the tools you are using, because you can be satisfied with your workflow and not realize there's something better out there. But Atom is like infinite, because there is a plug-in for everything, a package for everything. So for me, this is a sufficient reason. And the fact that you can customize it and do that kind of stuff just by doing some JavaScript and CSS is, for me, a sell. It's enough. Okay. Thank you once again. Thank you.
The keyboard is the new compass! In this entertaining session, we will see how our beloved shell can fit into the workflow of the modern cartographer in the most surprising ways, and we will generate maps in the least expected places (your terminal, your desktop, your IDE...); analyse and visualise geo data with expressive SQL one-liners; manipulate file formats with shell I/O and useful libraries; geocode in the blink of an eye (or with your voice); make ASCII and emoji maps; transform Atom into a supercharged geo IDE; set up the perfect web mapping project environment in seconds; and many more! The CartoDB SQL APIs, along with the CartoDB Node client, SQL and PostGIS, plus a host of other open source libraries (GDAL, CSVKit, Yeoman...), will be showcased as the "survival kit" for the hurried but demanding mapper.
10.5446/20317 (DOI)
All right. All right, for our last presentation, we have Jonas Eberle from the University of Jena here in Germany, from the Department for Earth Observation, appropriately. Thank you, Eddie, for the introduction. And welcome to my presentation about geoprocessing services for Earth observation time series data access and analysis. At our Department for Earth Observation, we are dealing a lot with time series data and of course with time series analysis. And of course, the question is how to access these Earth observation time series data and how to help scientists, but also normal users, to get access to the data and run some analyses or execute some analysis services. So as you may have heard in the first presentation in this session, we have a bunch of Earth observation satellites in orbit and they are delivering data. And data providers are usually providing access to the data in databases, on file servers; some provide web-based applications, some are providing certain services, but they are all very different. So some are file servers, some are services. And in general, we want to have a web-based tool, or a web service, or a mobile app where you can access these data sets and also access some analysis services. And this is where the question comes in: what is necessary when developing an application based on spatial time series data? In most cases we need to integrate a further component, for example some web services, to make data discovery, but also data access and data analysis, more usable for these kinds of applications and also for users. And this is why we are talking about geoprocessing services, because we can use them to access data, to analyze data sets, and also to discover what kind of data is available. So first of all, let me introduce the use case we have in our department. We are working a lot with vegetation time series data. You see here a plot of a single pixel with the enhanced vegetation index, so information about the vegetation vitality, and this is from the MODIS sensor from 2000 to 2015. We clearly see a negative trend of the vegetation index, and we also see a clear change over time: it has higher values from 2000 to 2010 and afterwards we have lower values. But besides just plotting the data, the science community has developed different algorithms to make this clearer, or to extract some information. Like you can see here in the top right plot, this is a breakpoint analysis called BFAST, and you can clearly identify the change in the seasonality of the signal, so we can directly extract the time when the change happened. And this is a spatial trend plot: the brown areas indicate a negative trend of the vegetation and the green areas indicate a positive trend. So these are algorithms that have been developed by the science community. They are described in research papers and everybody can use them; they are open source and available. You just need the data first, and then you need to set up or use the software. But we have a bunch of Earth observation satellites in orbit, as I told you before. There is, for example, the MODIS sensor, from 2000 until now, with daily coverage. The data access is very diverse: you have access to file servers, but you can also use a web interface. You can use a web portal, but using a web portal is not a good thing when you want to automate access. The original data format is HDF.
When talking about Landsat, the original data format is GeoTIFF. They have some portals, and you can also use Google or Amazon to access some data sets. With the new satellites, new data formats are coming up, like JPEG 2000 for the optical data, and they all have different ways to access the data, like an OpenSearch interface, or you can also use Google or Amazon or a web portal. But the problem is that we have different access possibilities, different file formats, and no standardized services to integrate these data sets into your own application, for example. So we need a solution — every scientist needs a solution for this — to not have to deal with all these different file formats and these different data access services. And also for the data processing: at the moment, every user has to search for the data, request the data sets, download the data and process the data individually. So it would be better to have — I call this the ideal situation — a kind of middleware between the user and the data providers, and to provide services to automate all these things. And this is possible, as you may have seen in the first presentation by Matt with the landsat-util software. So what we need are easy-to-use web services to make Earth observation time series data access and analysis more affordable and more usable by all kinds of users. And so what we have developed and integrated is this kind of middleware software. At the bottom we have the different data providers like NASA, USGS, ESA, Google, Amazon, and on the top we have different clients — you can also build your own client and use our services. And within the middleware we have different tools, and each of the tools is exposed as a geoprocessing service, either for data discovery — we will see some examples later on — or for data integration. So the user can add a point or polygon geometry as an input and state the name of the data set, and this is a Web Processing Service that integrates the data into a common data format, and based on these we have the scientific algorithms I showed at the beginning. And all the data is also provided via OGC-compliant services, so that clients like a web portal or a mobile app can make use of these services. These services are also based on open source software. For the data processing we are using PyWPS; it's a standards-compliant software written in Python for the OGC Web Processing Service. But as I said before, we are also providing the data we have integrated into the middleware through OGC-compliant services. So we use istSOS, a Python-based library for the Sensor Observation Service — we use this for single-pixel time series — but MapServer or GeoServer are also possible for serving a WMS or a WFS for the analysis outputs, and a catalogue service to also provide some metadata, because if you process the data it's also important to create new metadata so that users know what has been done.
So let me talk a little bit — I have just two slides — about the OGC Web Processing Service specification. Generally, of course, we have different operations, like GetCapabilities to show what kind of processes are available; then we have DescribeProcess to identify what kind of inputs and outputs are necessary for the process; and with the Execute operation we can execute the process. We can do this in a synchronous way, but also in an asynchronous way, so that the client needs to poll the server about the status. With this we can implement a process that takes some hours, or some days, or even longer. And within each process we can do nearly everything: with the PyWPS we are using, you can integrate all kinds of processing tools that are available within Python, or you can of course also use command line utilities that are executed from the Python interpreter. At the moment, version 1.0.0 is the most used specification. Quite fresh — I think a few months ago — the new 2.0 specification was published, with further operations like GetStatus, GetResult and Dismiss, but the existing operations are still available. So here I have shown some examples. At the top we have our WPS endpoint; then one request is the GetCapabilities, one is the DescribeProcess with the process identifier, and the third one is the Execute statement. So it is really easy to execute a WPS service: as an example, here we have just the process identifier, and as data inputs we have the data set name and a point, the x and y coordinates, and that's all. With this we can execute the WPS. There are several open source software packages that implement the WPS specification, like the 52°North WPS based on Java, the deegree WPS, also Java-based, the ZOO-Project WPS, C-based but with additional supported process languages, the PyWPS we are using, and GeoServer also has an extension for WPS. So let me show you some examples of the data discovery, the data access and the data analysis. This is first an example of a process to discover what kind of Sentinel-1 data sets are available in a specific area of interest. So this is the polygon in well-known text format; then you can filter by product, and additionally what we have integrated is a minimum overlap, so in this request we are only interested in the Sentinel-1 scenes that have a minimum overlap of 70% with the area of interest. The output can be quite different; what we have integrated is a CSV format, so that every programming language can easily access this file, with the file identifier, the calculated overlap, the download link and the geometry of the Sentinel-1 scene. This can be extended, or we can also just provide the download links, for example. So this is a way to make data discovery more usable for the users, because they only need to execute the process with a polygon geometry and the minimum overlap, for example.
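As a rough illustration of the key-value-pair Execute shown on the slide, here is a Python sketch; the endpoint URL and the process identifier are assumptions, and only the service/version/request/identifier/DataInputs structure comes from the WPS 1.0.0 specification:

    # Sketch of a WPS 1.0.0 Execute via key-value pairs; endpoint and identifier are hypothetical.
    import requests

    WPS_URL = "http://example.org/wps"            # hypothetical endpoint
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "modis.timeseries.point",   # hypothetical process name
        "datainputs": "dataset=MOD13Q1;point=11.58,50.93",  # dataset name plus x/y point
    }
    resp = requests.get(WPS_URL, params=params, timeout=120)
    print(resp.text[:500])   # the ExecuteResponse XML (or the raw output, if requested)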
We have further processes for data access. This is an example of the MODIS vegetation time series data access for a single pixel, as I showed before. We have several outputs, like a directly plotted PNG image, but also a CSV file, and as I said before, the CSV file is also exposed as a Sensor Observation Service. Additionally, we also provide a unique identifier as an output; we can use this unique identifier later on to run our analyses, for example. And it is nearly the same if you want to extract an area and not only a single pixel — an area of interest. Again, the geometry is the input, and for this we have extracted the data sets according to the geometry and stored the data in a common output format, as you can see here: for each date a single GeoTIFF, but we also have a multi-band file, so users can very easily use this, or download the data sets and use them in their own programming library, for example. As I said before, we have a unique identifier for each of the data integration processes. This has been introduced to reference the data sets that are still available on our server and to then execute our analyses, because we can just provide the unique identifier from the previously executed WPS service and say, okay, please run this breakpoint analysis. And there are further options for different parameters, so every parameter that is available within this algorithm is also exposed and can be used as a data input. And this will be the output: one PNG image, but also the data behind that plot. And this is the same for the greenbrown trend analysis. So this is all based on a Python library we are developing at the moment; it's called pyEOM. It is currently in development, but it is available in its current state on my GitHub account. It is mainly for data discovery and data integration, and data analysis will be integrated very soon. We have different data sets available, so a lot of MODIS products are here, and you can use the software on your own server, for example. And here are some examples of how to use it, for example for data ingestion — behind this method is the complete download of the data sets from another server, for example — and also the data extraction. We also have a link to Google Earth Engine, so if you have an account for Google Earth Engine you can also use Google Earth Engine as a data source, and this might be faster for data download, depending on the area of interest, for example. We have a use case, the Earth Observation Monitor: this is a web-based tool and a mobile application where we are using these services, so users can just go to the web portal, for example, draw a point or polygon, and then integrate the data and run some analyses without any data processing at all, and at the end you can also download the data sets to have them offline on your computer and to make further analyses.
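To illustrate the chaining just described — one Execute integrates the time series and returns a unique identifier, a second Execute runs the analysis on it — here is a hedged Python sketch. The process names, parameter names and the way the identifier is parsed out of the response are illustrative assumptions, not the real interface of the Earth Observation Monitor services:

    # Hedged sketch: chain a data-access Execute and an analysis Execute via the returned UID.
    import re
    import requests

    WPS_URL = "http://example.org/wps"   # hypothetical endpoint

    def execute(identifier, datainputs):
        """Synchronous WPS 1.0.0 Execute via key-value pairs; returns the raw response text."""
        resp = requests.get(WPS_URL, params={
            "service": "WPS", "version": "1.0.0", "request": "Execute",
            "identifier": identifier, "datainputs": datainputs,
        }, timeout=600)
        resp.raise_for_status()
        return resp.text

    # Step 1: integrate the MODIS time series for a point; the response carries a unique identifier.
    access_xml = execute("modis.timeseries.point", "dataset=MOD13Q1;point=11.58,50.93")
    match = re.search(r"<wps:LiteralData[^>]*>([^<]+)</wps:LiteralData>", access_xml)
    uid = match.group(1) if match else None   # the real output name and format may differ

    # Step 2: feed that identifier into an analysis process, e.g. a BFAST breakpoint detection.
    if uid:
        analysis_xml = execute("timeseries.bfast", f"uid={uid};hfrac=0.25")
        print(analysis_xml[:300])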
As I said before, we have this mobile application, so you can go into the field and, with the GPS location for example, directly extract the 15 years of MODIS vegetation data, and we are also executing these time series analyses like the breakpoint detection and the trend detection. Here we also benefit from using WPS, because we can use existing JavaScript clients, for example, and just request our WPS. And here is also the chaining of these processes: the first process for data access and the second process for data analysis, and the data is then sent back to the mobile device. So let me conclude. WPS is a standardized web service specification for all kinds of processing tasks, not only for geoprocessing services — you can also use it for administrative purposes, for example. But what is important is that you can also use a WPS for data access and data discovery, for example to provide a very user-friendly way for data discovery and data access, because when accessing data, as I showed before, there are a lot of steps included, like the download, clipping to the area of interest, and maybe also a cloud mask or quality masking. But further research is needed to harmonize input and output descriptions if you have distributed WPS or if you want to use a WPS from another organization. And of course it will also be very important that data providers can provide more simple services, because we are a research institute, so our goal is to do research, to test new ways of data access for example, but we are not the organization that will provide these services forever with a big infrastructure. Some words about our future work: we want to integrate further processes and analysis tools, for Landsat and Sentinel for example, but also have the possibility to provide an easy-to-use WPS API so that users are able to integrate their own processes, for example by providing an IPython Notebook or Python code, and so that users are able to directly access the data that is on the server and run some analyses there. The second point is also important: data and processing services need to be linked to each other, to automatically find input data for processing services and vice versa. So if I have a portal, for example — you may know the GEOSS portal from the Group on Earth Observations — there are tons of metadata available, but if you find a data set, in most cases there is no link to processing services and so on. So we hope to have this kind of link in the future, and with that I would like to thank you for your attention. Thank you very much, Jonas. Questions? How many satellites are in space? Any, yes, hang on. Time series questions. I saw in your software, for example, that you use pycsw. The question is, I find, at least the way you use WPS — usually I have seen most people using it for processing, and you mentioned you are using it for data access or data discovery. The question is, pycsw, like any catalogue service, already provides an endpoint for data discovery, so with a geo extension you can also specify the bounding box, and so it's kind of difficult to understand why you went for a WPS approach when a catalogue can provide you this endpoint with the bounding box option and time/space kinds of queries, so you get the results, and you even get the results in RSS format or JSON format, so you can write your client according to the result set. So what is the rationale to use WPS for data discovery?
So our goal was — if you have a catalogue service, a CSW-compliant catalogue service, in general you're only serving the metadata, with possibilities to filter the data. But what we are doing in this example is also processing the area of interest that the user has given as an input, and calculating, for example, the overlap between the satellite scene and the bounding box or the polygon that the user has given as an input. So the first point is that we are providing some processing functionality within the data discovery, and the second one is to also provide different, easy-to-use file formats. Of course we could also provide an RSS feed as an output, but in general the user then wants direct access, for example to the download links, so we are using this CSV format to make it easier for users, or just a plain text format with just the links to the scenes. So, if I want to use these processes for my work, for example — because I'm building a front-end application and right now I'm directly accessing the catalogue endpoint — if I want to use this, because I see that you add some value to the search itself, are these processes available in the public domain for us to use, the WPS processes which do this data discovery? For example, I imagine you take these input parameters and probably construct a query to the catalogue, get the results, and reformat the results. So are these processes available in the public domain to use? Yeah, so the processes are available online for public use, but as I said at the end, we are only a research institute, so we can provide access to the service for a while, but not forever. The processes are in the programming code behind it, and at the moment not available as open source or online, but we will integrate all these processes within the pyEOM library, so you could easily set it up on your own. Yeah, thanks. Is such a request quick to get an answer? I mean, do you use the WPS 2.0 asynchronous functions or not? You can use this also as an asynchronous process. At the moment we haven't optimized for performance, so what is behind here is direct access to the ESA interface, so if the ESA interface is slow, then our service is also slow, because we haven't optimized for performance. So if you have an interest in providing such a performance-optimized service, it would be better to harvest the metadata and make your queries offline, not connected to the ESA tool — or maybe wait until Matt has provided a Sentinel utility as he does for Landsat. That was the question. How fast are your services, your WPS services, and what is your capacity in terms of cores, RAM, memory, etc.? How fast depends on what kind of data access or data analysis you are executing. For data access we are using Google Earth Engine in the background, so for a single pixel it is fast, but it could be better, I guess. If you have a larger area of interest, then it can take some time. Our infrastructure at the moment is not very big; for the Earth Observation Monitor, where all the services are exposed, we have just a single server at the moment, but we want to improve this. We have access to a cluster computer, so we would like to improve this in the next two years. But again, the focus of this work was not on increasing performance but on showing the possibilities of a WPS. All right.
We are out of time, so thank you, Jonas. Thank you all for attending; we are on a break now, and then we will see you later this afternoon. Thank you.
Earth Observation time-series data are valuable information to monitor the change of the environment. But access to data and the execution of analysis tools are often time-consuming tasks and data processing knowledge is required. In order to allow user-friendly applications to be built, tools are needed to simplify the access to data archives and the analysis of such time-series data. In this work, web services for accessing and analyzing MODIS, Landsat, and Sentinel time-series data have been developed based on the Web Processing Service specification of the Open Geospatial Consortium and made available within the Earth Observation Monitor framework. The Python library "pyEOM" has been developed to combine access and analysis tools for Earth Observation time-series data. Algorithms developed to analyze vegetation changes are provided as web-based processing services in connection to the prior developed access services as well. Using the services developed, users only need to provide the geometry and the name of the dataset the user is interested in; any processing is done by the web service. The services and applications (web and mobile) are based on geospatial open source software.
10.5446/20312 (DOI)
It is better. Yes. My name is Carmen and I want to show you a map client where we provide Sentinel-2 data. I don't have to introduce Copernicus anymore; Andrea Faisback did it in the opening session. And let's start. First, I work at mundialis and we are located in Bonn, so we didn't have to travel as far as you, probably. And you can see here what we are doing. We are the mundialis GmbH, and we also have another project; it is called mundialis art. You can see it at mundialis.de, and it shows how beautiful the Earth is in this imagery. So the Copernicus programme provides a lot of data. They have services, six different ones. But there is also raw data, and normal users might not know how to process it. So we found a way to download it: they provide an API you can query, you can filter and search, and we use this one to get the data that we like. We also retrieve the preview images for all the Sentinel-2 data from the Copernicus programme and store them in our database, along with only the bounding boxes and the metadata for them. And then there is another open source tool, it is called sen2cor. This does the atmospheric correction for Sentinel-2 data, and it helps a lot because we can use it. There was also a major release of a new version: before, it took eight hours to process one scene, about seven gigabytes, just for the atmospheric correction, and with the newer version it is possible to process sub-scenes at the same time, so it's much quicker. And then we run some algorithms with GDAL and also with GRASS GIS. We take the raw data, and you can choose what kind of processing you would like to have; then it gets processed, and at the end you get an image out which is optimized for the web. And then we publish it with GeoServer. We also have different processing chains for GIS use, because for the web we make the images very small so that they can be displayed quickly; those other processing chains are not available in the web client at the moment. So what happens when the data is in GeoServer? It is put into a web client, and this web client is built with open source software as well: it contains BasiGX, OpenLayers 3, GeoExt and ExtJS. And now let's look at the client. It is here, and there you can already see some of the preview images. We don't have such a smart solution as they presented before: we show all the preview images, but we limit it a little bit for the user, because you can't filter and see all of them at the same time. And these are the paths that the satellite took, probably yesterday. And here you can filter, first by time: you can select a time range and then it loads, and then you see many more preview images. You can also change the cloud coverage. So you see now, when you allow more cloud coverage, the available time range gets shorter; when you go to zero, you are allowed to take the whole time range, because there are not so many cloud-free images at the same time. And if you click on one, you see the cloud percentage — it's zero because I chose zero — and then you can get all the information. And this is, for the one spot that I clicked on, all the images which are available from this satellite. And then you get some information like the name, the UUID, the size — you can see here it's six gigabytes — and the date. Okay, and then I wanted to show you processing, but as I said before it takes a lot of time. So I prepared an ID; you can also filter by ID if you know what you want, it is possible. Search.
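As an aside on the processing chain described above, here is a minimal sketch of what the GDAL-based compositing step could look like once sen2cor has produced corrected band files; the band file names and the scaling values are assumptions, and the real chain at mundialis also involves GRASS GIS and further web optimisation:

    # Minimal sketch: build a true-color composite from three Sentinel-2 bands with GDAL tools.
    import subprocess

    bands = ["B04.jp2", "B03.jp2", "B02.jp2"]   # red, green, blue (assumed file names)

    # Stack the three bands into a virtual raster, one band each.
    subprocess.run(["gdalbuildvrt", "-separate", "rgb.vrt"] + bands, check=True)

    # Convert to an 8-bit GeoTIFF, stretching reflectance values into 0-255 for display.
    subprocess.run([
        "gdal_translate", "-of", "GTiff", "-ot", "Byte",
        "-scale", "0", "3000", "0", "255",      # rough stretch; real products need per-scene tuning
        "rgb.vrt", "rgb_composite.tif",
    ], check=True)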
Back in the client: I switch the preview layer off, I have my search result here, and then you can order a processed scene. I will choose a composite. You can also choose some indices, for example the normalized difference vegetation index. And for the composites, we prepared some beforehand. This is the true color composite, which takes some of the bands of the Sentinel-2 data; the color infrared composite is also very popular. And we did some more. I will order this one, and then it sends the job, and at some point you should get a success message. Okay, let me show you more features of the client. You can also see some of the footprints here in the client which are already processed, because each one which is processed will be published in the client. For example, we processed Buenos Aires, and then you can slide through all the composites which are already processed. Also here you can see some of the indices we calculated, and this is the NDVI layer. We don't have it for every footprint, but for some, and it's growing. There it is. Normally this would take hours, but so that you can see what would happen, I prepared a preview. This will be the email you get, with a link for the web service — at the moment it is only possible to view it in the client. You will get the link like this, and when you open it you go directly to the scene that you ordered, in the composite. Do I have some time left? Oh, okay. Then I was very quick. Yes. Maybe we can start earlier with the questions, or I can show some more features of the client. For example, if you look at the composites, you see they are very beautiful and you can have a look at them, and on the other hand, for the indices, you can find things out and get value out of the data. Yeah. Okay. Let's start with the questions. Yeah, that's all Sentinel-2. And no, they don't; they are only for the land. Which areas? It is. Yes. Can you show them by area? Yes, you can see it here. Yes, but they are only for the preview images, and you can choose between any, and when you select the data then you only get the result in this box. So you have to choose one by one at the moment. Yeah. When you click the order button, the job will be sent to a server. We are always looking for server architecture, for computing power, and we have somewhere we can compute at the moment for one week, until August. So now you can order products, and after one week we talk to them and we will see. So if you know how to get hardware power, we would be very happy. And then we have one server which receives all the jobs and sends them to different nodes. So we already set up some servers where everything is installed. No, no, it's server based. So I see that you can search for data, but do you use some kind of OGC standards? For example, for catalogues you have CSW, for processing WPS. So do you make use of these standards-compliant services, the real standards, to make your processing call or to search for the data? The OGC standards we only use for publishing the web services for viewing, but not a catalogue service yet. No, no. I see that you have built this platform based on open source software, but I imagine that while building it you had to write some code to tie it all together. So the question is, are you also open-sourcing this glue code, the code that combines all this open source software? I haven't done it yet. Probably.
And you wrote your own code to download the data from the SciHub, because it is not available another way? Yes. Yeah. That's how it is for now, because we can store it. Yes, which is the API from ESA. Yes, we haven't implemented it yet, but with the email you get, we want to provide a link where you can download the image as well, probably as a WMS. Sorry — as we use GeoServer, you can use any format that GeoServer supports. We only have the predefined ones; there is a form when you click on the scene, I can show you. We have predefined composites and also indices, and at the moment we have the vegetation index and some others. But at the moment it's not possible to add your own algorithm to it; it's not yet open. And GRASS GIS — mostly GRASS GIS, yes, and a little bit of GDAL as well, and the web interface. A small comment: you were talking about the atmospheric correction. The whole product that is in the interface, is it the result of the sen2cor processing, or is it direct? This one? No, this one is a composite. But I was not sure how you apply the sen2cor correction to all of them. If one scene is ordered, then we apply it to all of them. We have another client for it, but it's not in this one; if you like, I can show it. This is just a demo client. I think — sorry, the data is lost somewhere. But here it would be possible to slide through time. Two filters, yes, for the preview images. Okay, any more questions? Okay.
With the Copernicus programme of the European Union everybody can access remote sensing data produced by the so called "Sentinels", the satellites designed for observing the earth from space. The data can be accessed in raw state or via Copernicus Services which are dedicated to a certain topic. But if you would like to extract certain information, you need to process the raw data. How would it be possible to use open source software to process the raw data and then make it available for further use, e.g. in the web by using open geospatial web standards? This talk presents a webmapping client containing footprints of all currently available Sentinel 2A scenes which you can filter and select and send a job which processes this scene including download, atmospheric correction and several image processing algorithms (e.g. NDVI). When the job is done the processed scene will be loaded in the web client, published as an OGC web service which makes it reusable elsewhere. The client is built using OpenLayers3, ExtJS6, GeoExt3 and BasiGX. The processing is done with sen2cor, GDAL and GRASS GIS. The product is published with GeoServer.
10.5446/20305 (DOI)
So, Andrea is going to present some work on WPS, which actually started back then in that code sprint. Yes. You were working on that, yeah. And SQL views. We have 20 minutes, then five minutes for questions. Okay, so: processing data in GeoServer with WPS and SQL views. As you already know, my name is Andrea Aime. I work for GeoSolutions; it's a company based in Italy. We provide support for GeoServer, GeoTools, GeoNetwork and so on. And not just support — we are actually core developers in each and every one of these projects, meaning we can push changes into them and improve them, and we do so regularly. So today's presentation is about, well, WPS and web mapping. Many years ago, doing web mapping basically meant pushing a map on the net, doing some styling and maybe some editing, maybe some PDF printing, and that was most, all of it. Since a few years ago, it's almost impossible for GeoSolutions to make a web application without having some processing in it. Most of the new applications we make are not just mere visualization and data extraction tools; they actually provide some capability in terms of processing. So this presentation is about showing how we use the GeoServer WPS to power several types of applications. First, some quick reminders about WPS. WPS stands for Web Processing Service. It's the OGC way to expose to the web, for a change, not data but processes. The web can then provide the data, the WPS does the computation and provides back results. A simple example here: applying a buffer to a line. I have the line, which is an input, I invoke a buffer asking for a distance of two — whatever, two meters maybe — and I get back a buffered geometry. This is sort of the simplest example I can think of that's also geospatial. WPS can also do non-geospatial calls — I could write a hello world process if I wanted to, but it wouldn't be very meaningful. WPS is one of the few OGC protocols that supports asynchronous requests, which is very important for a number of cases, and we are going to see some of them. In most OGC protocols, you make a synchronous request: you send an HTTP request to the server and you wait for the response to come back. When it comes back, you get the answer, period. Which is nice, it's simple. It doesn't work if you have to wait for minutes to get the response back, because the HTTP connection is going to rot in the meantime and probably get severed. In asynchronous, you send a request and you get back right away a response that says: okay, I took care of that request, I'm going to process it, and here is a URL that you can poll to verify how far along I am. Maybe you poll it: it's queued, it's queued, it started, 10%, 50%, 90% — oh, look, here is the result. The common WPS setup, well, the basic one, assumes that the WPS has no local data, and it's going to fetch it from other OGC services or from random HTTP server calls. It's going to fetch the data, process it, and then return it back to the client, which is sort of limiting if you have to play with very large amounts of data. In GeoServer, the WPS is integrated in an ecosystem: GeoServer has its own local layers, its own local services, and so on, and the WPS can play in this ecosystem. So in the GeoServer WPS, you can fetch data from local layers just by using their names instead of passing around the data.
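For reference, the buffer example above can be written as an actual WPS Execute request against GeoServer's built-in JTS:buffer process; a sketch follows, where the endpoint URL is an assumption and the request body follows the WPS 1.0.0 XML encoding:

    # Sketch: post a WPS 1.0.0 Execute to GeoServer's JTS:buffer process.
    import requests

    WPS_URL = "http://localhost:8080/geoserver/ows"   # assumed local GeoServer

    execute_xml = """<?xml version="1.0" encoding="UTF-8"?>
    <wps:Execute version="1.0.0" service="WPS"
        xmlns:wps="http://www.opengis.net/wps/1.0.0"
        xmlns:ows="http://www.opengis.net/ows/1.1">
      <ows:Identifier>JTS:buffer</ows:Identifier>
      <wps:DataInputs>
        <wps:Input>
          <ows:Identifier>geom</ows:Identifier>
          <wps:Data>
            <wps:ComplexData mimeType="application/wkt">LINESTRING(0 0, 10 10)</wps:ComplexData>
          </wps:Data>
        </wps:Input>
        <wps:Input>
          <ows:Identifier>distance</ows:Identifier>
          <wps:Data><wps:LiteralData>2</wps:LiteralData></wps:Data>
        </wps:Input>
      </wps:DataInputs>
      <wps:ResponseForm>
        <wps:RawDataOutput mimeType="application/wkt">
          <ows:Identifier>result</ows:Identifier>
        </wps:RawDataOutput>
      </wps:ResponseForm>
    </wps:Execute>"""

    resp = requests.post(WPS_URL, data=execute_xml,
                         headers={"Content-Type": "application/xml"}, timeout=60)
    print(resp.text)   # the buffered geometry returned directly as WKT

Because of the RawDataOutput response form, the buffered geometry comes back directly as WKT rather than wrapped in an ExecuteResponse document.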
There is also a nice integration with WMS through rendering transformations, which are a way to put in your style a directive that will call a WPS process before rendering the data, so you can transform the data on the fly. One example: contour extraction. We have an integration with the GeoServer UI, so we have a demo page to try out WPS requests, and so on. So we've got a few more connections in our WPS than normal. This is the demo request builder; I'm not going to spend time on it, it's not particularly interesting. Rendering transformations, as I said, are a way of applying a transformation on the fly while we render stuff, so that we don't have to keep 10, 15, 100 versions of the same data processed in different ways — we just always do it on the fly. The rendering transformations are particularly optimized in such a way that they only process the area that's visible in the current GetMap request, and only at the resolution requested. So we never work at the native resolution: if you are looking at a giant digital elevation model of the entire world at 10 meters, when you look at it, you are looking at it at, I don't know, 30 kilometers per pixel, and that's the resolution we use to extract the contour lines for that zoom level. That makes for fast extraction; it gives us the best of both worlds. So before we dive into WPS, let's take one step back: for processing, you don't always need the WPS. There are spatial DBMSs that you can use. Spatial DBMSs such as PostGIS can do computation on the fly in SQL. It's pretty efficient, it has all the data locally, it can do several types of computations, so why not leverage it? And the answer is: well, if it fits your purpose, do it. In GeoServer, we have the notion of a parametric SQL view. The normal SQL view would be that, instead of pointing GeoServer to a table, you give GeoServer an SQL statement to run as the data source, and with that you can already do some processing. What makes it more interesting is that in that SQL I can embed parameters that will come from the client, from the web, to control how the computation is done. Of course, you have to protect yourself against SQL injection attacks, so we have some validation going on. But generally speaking, if your data is only in a spatial database and you can express your processing as a query, maybe with some parameters to control it, by all means, do your processing this way. It's probably one of the most efficient ways to do it. It's not always the best — I'm going to show you some cases in which different approaches are required. This is an example of a query that builds, using ST_MakeLine, lines out of sets of points over time. So let's have a look at some examples, some applications that we built at GeoSolutions that are using either WPS or SQL views or both. First example, the soil monitor. It's a pure local WPS. So, the soil monitor — first let me take a step back. In WPS, processes are provided with the server; we have like 80 simple processes already available in GeoServer, but you can write your own and plug them in, and this is what we normally do when we have a specific customer project: we build our own WPS processes. In this case, we had to compute, at the Italy level, some indices, some processing of the soil sealing, and a change matrix in terms of land use. This is heavy raster computation. To accelerate this one, we used JCuda to integrate the GeoServer WPS with the graphics card and speed up the computation. Doing it is possible.
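To make the parametric SQL view idea more concrete, here is a hedged sketch of how such a view could be driven from a client; the layer name, parameters and the SQL in the comment are illustrative (the SQL would be configured in GeoServer as the view definition, with each placeholder guarded by a validation regular expression), while the viewparams syntax itself is the standard GeoServer mechanism for passing the values:

    # Hedged sketch: request a layer backed by a parametric SQL view, passing values via viewparams.
    #
    # Hypothetical view definition configured in GeoServer, with %start% and %end% as parameters:
    #   SELECT vessel_id, ST_MakeLine(geom ORDER BY obs_time) AS track
    #   FROM positions
    #   WHERE obs_time BETWEEN '%start%' AND '%end%'
    #   GROUP BY vessel_id
    import requests

    GEOSERVER = "http://localhost:8080/geoserver"     # assumed endpoint

    params = {
        "service": "WMS", "version": "1.1.1", "request": "GetMap",
        "layers": "demo:vessel_tracks",               # hypothetical layer backed by the SQL view
        "styles": "", "srs": "EPSG:4326",
        "bbox": "-10,35,5,45", "width": "768", "height": "512", "format": "image/png",
        # viewparams fills the %start%/%end% placeholders; pairs are key:value, separated by ';'
        "viewparams": "start:2016-01-01;end:2016-03-31",
    }
    png = requests.get(f"{GEOSERVER}/wms", params=params, timeout=60)
    with open("tracks.png", "wb") as f:
        f.write(png.content)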
Doing the GPU integration is often not trivial, because you basically have to take an entire large image, throw it at the graphics card, and make sure that you do all the steps of the computation within the graphics card before coming back. If you start doing it back and forth, in steps, the graphics card is actually going to be slower. But if you do it right, it's fast. So we had to compute a number of soil sealing algorithms, all calculated with JCuda. This is an example of a change matrix. I have two raster maps with the land use in two different years, and the matrix shows me how much of the water bodies became agricultural land and so on. Of course, I get a bunch of zeros, but if I look at agricultural areas, then I can see that some of them became artificial surfaces over time. And if I choose a particular type of soil, I can also build a chart of how the land transformed over time into other types. So all of this is done via WPS; as I said, most of the heavy computation is done with JCuda, and we also do some bits in Java to perform the final aggregation and visualization. All these processes are running asynchronously, because even with JCuda they take time. So in the client we also built a visualization of the processes that are running: when they started, how far along they are, and so on, so you can track each computation. Another example, completely different, this one. This one is pure SQL views, so no WPS involved. It's the Tuna Atlas from FAO. In this case, we have a large database; each cell contains the catches of tuna classified by species and by the technique used to fish them. And we have a historical perspective on this data, so we have a time dimension, quarter by quarter, for a number of years. And with this user interface, you can build a map, choosing and filtering the type of fish, the type of fishing technique, and so on, and get a distribution map as a result. What powers this map is a very simple query that we are running, in Oracle in this case. And as you can see, there are a few parameters in it — they are the ones between percentage signs. One of them is the actual aggregation function, the %op% one, which allows us to do average or sum on the fly. Of course, we define regular expressions to make sure that we cannot be attacked; for example, for the op, the default is sum, and the regular expression says it's either AVG or SUM. And it works fine; it's processing on the fly without WPS. In this project, we also created the animator tool, which allows us to specify a certain number of years and quarters and get back an animated GIF of these maps, so it shows the evolution over time. This is one example of the URL calling the animator: we are specifying the view parameter, here TA, and then its values, and each value will generate a frame. This is another interesting example, another pure WPS example; in this case we are doing just downloading. So you might say, wait, wait a second, WFS and WCS are meant for raw data extraction, so why are you not using those protocols? Well, for a very simple reason: these extractions can take time, because we are talking about extracting hundreds of megabytes of data, if not more, so we cannot just wait for the WCS or the WFS to do their merry job and get us back the data — the HTTP request would expire.
So instead we have created a process and we leverage WPS asynchronous calls to do the large extraction and get the data back to the client in a more manageable way, and of course we can ensure that the data gets back to the client. This works for vector layers, it works for raster layers. You can find it in the community section of GeoServer, so it's not part of the releases, but if you check the nightly builds you'll find it. You can do clipping — this is a user interface on top of the process — and it's using the buffer process to depict a buffering area if you define one in the UI: you can say, okay, I'm going to draw a polygon or a line, and then I want to buffer it, and then give me the clip inside the buffer. And it's possible to track the download status, again via async WPS. And this is more or less the structure of the system: we have MapStore at the front, it fetches the list of layers from the WMS GetCapabilities, and then it can call buffer to do the buffering; it calls a process called download estimator, because we have limits on how much data you can extract, and via GetStatus it gets to know how far along we are. Another interesting example, this one: it's pure WPS, but it's against remote processes, so processes that are not run locally but on another machine. Typical case: scientific data processing. You have someone that wrote a Python, Octave, MATLAB, whatever procedure to do heavy processing on the data; it's meant to be run on another machine, not the WPS one — you might have a cluster in between and so on. So we put together another community module that you can find, which is called WPS remote, by which a remote Python or command line tool can be invoked. It uses XMPP as the communication method between the remote process and GeoServer. XMPP is a protocol thought out for chat, but it works very well for this case as well. I have some sequence diagrams, but basically the idea is that you can stand up a new processing node, it will register to the chatroom, GeoServer will recognize this, will fetch the processes that the new node can run, and then expose them as WPS processes on the web. When they are called, it will pass all the parameters down, track the execution, and eventually get back the results via some shared file system — the idea is that your GeoServer data directory is shared on the file system somewhere. And, well, this is one example of change detection and co-registration algorithms that we are running off another machine. Sometimes it's best to do both, so sometimes you have to run both WPS and SQL views. This is a project in which we have to calculate the vulnerability of people and the environment against accidents involving the transport of dangerous goods like, I don't know, liquid oxygen, petrol, gases and so on. So you can imagine there is a large number of layers, and there is a large number of tiny segments throughout, because for each 500-meter segment we know how likely it is to have a car accident there. And then we have some buffer distances: depending on the kind of goods you transport, a different area is computed, and we actually have 51 buffer distances for each and every segment, and we have millions and millions of segments on the map. The risk formula is really, really complicated — it's so complicated that actually some parts of it have a meaning of their own, so you can compute it all or you can compute part of it — and it has like, I don't know, 20, 30 parameters that you can set up in the client.
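Going back to the asynchronous calls used for these long extractions, here is a minimal sketch of the pattern: ask the server to store the response, then poll the statusLocation until a final state is reported. The process identifier and input below are placeholders, not the actual download process configuration:

    # Minimal sketch of the asynchronous WPS 1.0.0 pattern: store the response, then poll it.
    import re
    import time
    import requests

    WPS_URL = "http://localhost:8080/geoserver/ows"   # assumed endpoint

    execute_xml = """<?xml version="1.0" encoding="UTF-8"?>
    <wps:Execute version="1.0.0" service="WPS"
        xmlns:wps="http://www.opengis.net/wps/1.0.0"
        xmlns:ows="http://www.opengis.net/ows/1.1">
      <ows:Identifier>gs:SomeLongRunningProcess</ows:Identifier>
      <wps:DataInputs>
        <wps:Input>
          <ows:Identifier>someInput</ows:Identifier>
          <wps:Data><wps:LiteralData>value</wps:LiteralData></wps:Data>
        </wps:Input>
      </wps:DataInputs>
      <wps:ResponseForm>
        <!-- storeExecuteResponse + status turns the call asynchronous -->
        <wps:ResponseDocument storeExecuteResponse="true" status="true">
          <wps:Output asReference="true"><ows:Identifier>result</ows:Identifier></wps:Output>
        </wps:ResponseDocument>
      </wps:ResponseForm>
    </wps:Execute>"""

    resp = requests.post(WPS_URL, data=execute_xml,
                         headers={"Content-Type": "application/xml"}, timeout=60)
    status_url = re.search(r'statusLocation="([^"]+)"', resp.text).group(1)

    while True:
        status = requests.get(status_url, timeout=60).text
        if "ProcessSucceeded" in status or "ProcessFailed" in status:
            break
        time.sleep(5)   # keep polling until the server reports a final state
    print(status)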
Visually the result looks like a rendering transformation, so depending on the zoom level we display the result either as a raster grid or, if we are close enough to the ground, we display the single segments showing the probability, sorry, the vulnerability of the target in that segment. So as you can see some are red, so very vulnerable. How do we compute this efficiently, given the large amount of data and the large number of dynamic parameters? We cannot compute it all on the fly; we cannot compute it all with SQL views, because there are too many combinations; we cannot compute everything in Java, because there is too much data. So what do we do? We mix and match. The parametric queries are actually stored in a database and we compose them like a Lego kit, and there's a WPS process that builds the right parametric view on the fly and runs it. Part of the computation can actually be pre-processed, not all of it, but we pre-processed as much as we could, and the result is that we compute as efficiently as possible in the database, on the fly, driven by a WPS process, but most of the actual calculation is done by the database. In the database we also do cross-layer filtering, so: find me whatever intersects these features in another layer. And yeah, we also had to work a bit on computing 51 different buffer distances on top of so many layers. This one I'm going to skip, it's not really adding much to the presentation, and so this is the end. Thank you. Some questions from the room? Is there any configurable timeout until which the results of the processes are kept? So if you look into GeoServer 2.8 and 2.9, sorry, the current stable and maintenance releases, you will find a maximum execution time, which is the total of queuing and running. If you look into GeoServer 2.10, which is going to be released in October, you will actually find separate controls for maximum queuing time and maximum execution time. So in WPS we have limits for pretty much everything, not just computation: you can also enable and disable processes, and then you can go into a process and say, okay, this geometry cannot be more than 10 megabytes, and this value, which is a number of contours, cannot go beyond 100, and so on and so on. So you can control a bit how much CPU and resources you're going to use. GPU processing, are you using it for visualization or for computation? We are using it for computation. This one is actually computing the change matrices and the soil sealing, so it's going from data to data, not data to visualization. Actually the last part of the visualization we do with this library called Jiffle, which allows us to apply some raster algebra on the fly and give us a nice output to display. Anything else? Apparently it was all clear. Even I don't have a question. Okay, thank you. Let's wait another five minutes for the next speaker. Johanna, are you already here? Ah, you're there. Thank you.
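As an aside to the cross-layer filtering mentioned in the talk above (find whatever intersects features of another layer, pushed down to the database), GeoServer's querylayer extension exposes this as the querySingle function inside a CQL filter. A minimal sketch, assuming that extension is installed; the workspace, layer and attribute names below are hypothetical.

```python
import requests

# Filter road segments by the geometry of one feature from another layer.
# querySingle(layer, attribute, filter) returns that attribute of the first match.
cql = "INTERSECTS(geom, querySingle('transport:hazard_zones', 'the_geom', 'zone_id = 42'))"

resp = requests.get(
    "http://localhost:8080/geoserver/ows",      # assumed GeoServer endpoint
    params={
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "transport:road_segments",  # hypothetical layer
        "outputFormat": "application/json",
        "cql_filter": cql,
    },
)
print(len(resp.json()["features"]), "intersecting segments")
```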
This presentation will provide the attendee with an introduction to data processing in GeoServer by means of WPS, rendering transformations and SQL views, describing real applications and how these facilities were used in them. We'll start with the basic WPS capabilities, showing how to build processing request based on existing processes and how to build new processes leveraging scripting languages, and introducing unique GeoServer integration features, showing how processing can seamlessly integrate directly in the GeoServer data sources and complement existing services. We'll also discuss how to integrate on the fly processing in WMS requests, achieving high performance data displays without having to pre-process the data in advance, and allowing the caller to interactively choose processing parameters. While the above shows how to make GeoServer perform the work, the processing abilities of spatial databases should not be forgotten, so we’ll show how certain classes of processing can be achieved directly in the database. At the end the attendee will be able to easily issue WPS requests both for Vectors and Rasters to GeoServer through the WPS Demo Builder, enrich SLDs with on-the-fly rendering transformations and play with SQL views in order to create dynamic layers. Andrea Aime (GeoSolutions)
10.5446/20304 (DOI)
So, we all know Jorge from the PyWPS project, which he's been active on, or still is, probably in his spare time. But Jorge recently got into soils, or actually quite a while ago already, I think. And he's presenting some standardization work in the soil domain. Okay, good morning. This is the last presentation before lunch, so everyone is very anxious to go to lunch, of course. Just a comment: in the plenary there was a question, are you a geographer doing IT or an IT person doing geography? Well, I'm neither of both. I'm an agronomist. So if you have questions about plant nutrition, you can also contact me at the end of the presentation. This work, first disclaimer, I'm here as a presenter because this work was done by a lot of people, and SoilML and the interoperability work was pushed by the southern hemisphere, let's put it that way. CSIRO and Landcare New Zealand put a lot of effort into this, and I'm just here presenting more or less on their behalf. Okay, so, yeah, wait, I have your mouse. So let's talk about soils. I know it's a bit weird to talk about soils at a geoinformatics conference, but it's the skin of the earth. It's not given much attention, unfortunately, because there are no dolphins or whales or fun stuff in the soil, and it's boring, it's below you. You never notice that it exists, but soils are important. How a soil is born involves multiple factors, the geology of a place; there are no two equal soils, because there's a lot of diversity in soils, and it's an extremely complex theme, actually. So why is soil so important? Another aspect that is not being regarded is that soils contain a huge amount of carbon, and I'm going to make a bit of PR for soils here: if you increase soil organic matter content by 0.4%, or 4 per thousand, you basically offset global warming, because it's exactly the same amount of CO2 that industry and humans are emitting. Just increase a bit the tiny amount of carbon in your soil and you have carbon sequestration. But it's a topic that doesn't appear in the news. It's not popular, like I said. So let's talk about soil data. Soils are mainly point data, let's put it that way to begin with. We have one thing called a soil profile, and a soil profile is this sort of vertical structure, how the soil is structured, because soil changes as you go through the depths. So not only where the soil is on the planet, but also how deep the soil is. And we have... what's going on? I think it's the cat. Oh, never mind. That's a fact. So the typical thing is that this is complex data that we're working with. It's almost like depth, time, location on the planet. And also soil scientists, because you ask 100 soil scientists what soil is and you get answers that will never finish. And we'll continue. So, continuing with the publicity, if you want to see more about soils, try also to check the Google tour of the World Soil Museum and see what exactly a soil profile is. So what is our current problem? Okay, people work with GML, KML, shapefiles, which are fine for what they do. But to describe what a soil profile is, then it gets complicated, because, for example, if you're working with a WFS, we have simple features that describe: there's a point and the point has a bunch of data associated in one table. And we need something a bit more specific. So that's our objective: to develop a standard for describing soil data and use XML and markup languages to do this. So the thing is that this work on soil interoperability and the test bed was done inside OGC.
That's why we are in this panel also. And my organization, ISRIC, is also a member. And we find that OGC is a very nice place to develop standards and test beds, and I'll continue explaining a bit why. So, the importance of standards, I don't have to explain much: harmonization, multiple organizations working together, sharing data. We simply like having a lingua franca that everyone understands. Come on, look how many languages people speak at FOSS4G, but everyone is speaking English. So basically what we always try to do is a standard, a sort of lingua franca, to exchange information. So what is the first step before you can all speak the same language? Let's say we need to organize our data, have everything more or less structured. Before we started working on developing SoilML, one of the things that we needed to do was make sure our databases have proper data models, proper structures, and more or less have things organized. To describe soil data in a database there are multiple different ways to do everything; it tends to be in PostGIS. More tables, fewer tables, more definitions, fewer definitions: we could be here for hours doing a presentation on soils and databases, but that's not the point. So, for example, the starting point for ISRIC was that we have a database with points, and what we did was more or less first put everything in a WFS, organize the data so it could simply be online, and then we started to help in the development of these standards. The thing is that for soil data we already have some implementations in the field. For example, we have ANZSoilML, which is mainly Australian and New Zealand; we have the INSPIRE definition of soil data; and there is ISO 28258, also called SoilML, that was developed by the same team a few years before. But the problem that we have with ISO is that you do an ISO, some people participate, and suddenly you start to get these sorts of emails: well, do you have the draft version of the document, because we don't want to pay for the ISO, or do you have experience with this? So it's a bit of a criticism I'm making about implementing ISO standards: documentation is paid for, people don't want to pay, and I find that it slows down the process a bit, in my opinion. So... oh, pop, pop, pop, la, la, la, the mouse is racing, goddammit. Okay, forget the mouse. So the first step to create SoilML is more or less to create a sort of data model organization. We need to mimic the real world in computer structures. For example, soils are related to... we start with a soil landscape, the landscape where the soil is, and then we start to drill down to concepts like what soil classification we have, what we have in the upper layer of the soil, where the samples were taken, the soil horizons slash layers, and we try to describe a real natural entity more or less in XML. But to arrive at this point there are a lot of discussions, because people start to think, oh, what is a soil horizon? What is a soil layer? What's the difference? Do we use one term, do we use another term? And it's very complicated, not complicated technically, but there's a massive amount of discussion. And the discussion can continue for months and months and months and nothing comes out. Well, it comes out at the end.
So for example, the soil profile that you see there before, you create a nice model like this, not going to enter in a lot of detail. And for example, we were, for example, in the release of this, we were working inside the interoperability program of OGC. It was like we had the deadlines to finish things, like for example, to finish the model, to present it, make it online, and make it to work. So actually, this was really nice because it stopped the, let's say, the discussions. It raises one point that people continue discussing this. They said, that's it. It finishes here. Either we agree, we don't agree, we go ahead. And actually, this was good, in my opinion. So at the end of the day, after some on this stage of the interoperability program, we finish with a standard, with a UML structure. It's good, in my opinion. Some development was left behind for another stages. And after that, all of these was more or less transformed in schemas. It was applied. And the next stage was to do it in real time, mainly setting up web services and see if it works or not. This is the strength of the interoperability problems of OGC, in my opinion, because U made a standard. We were working on a standard, but after that, immediately, okay, does it work? That's the big question. And how it works and if it's feasible to do it. So the work that we done, and my organization, Lantcare, CIDRO, was to simply go through their soil databases. In that part, each organization did what they thought it was best to organize their data and put it in proper data tables. And then we started programming to put this as a web feature service. The question here is that, in my case, I use mainly a GeoServer because it's open source. It's more or less used in other projects for doing these implementations. What we noticed was that some bugs, documentation was not clear. And then is the question that if you're working for this in the first time implementing this, is this a bug or I'm simply not understanding it? Then you go to the mailing list. The mailing list helps a lot. But what we want to do at the end of the day is simply map what you have, for example, on your database there on the right, the tables that you have, to something that is very specifically on the XML. So we try simply to go through one simple WFS or something very specific for soils. And the good thing is that, for example, you start to have these sort of WFSs that have specific soil information here. It's a bit too small. I have there the link. But this is the WFS reply and simply get there, like, for example, the soil specific location. And then through the links on the WFS, you can start hoping between the soils located in XY. The soil has different layers. And you go clicking and clicking on links. So this WFS almost starts to behave like a REST interface where you have links and you follow the links and you get proper information. And for example, then things like horizons, like you have one soil profile, you have here description of horizons. And this, I know it's boring XML, but then the thing is that you immediately can have websites working with your data. This was a demo done almost by chance on this soil interoperability where you have here multiple organizations with soil data. The ones in greens are from Isric. The ones in blue, they are from Syro. And the ones in purple, in the purple, you know, in colors, from New Zealand. And this was done by in Australia and they immediately build the sort of a portal. It was just a fast prototype. 
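A minimal sketch of the link-following access pattern described above: fetch a few soil features over WFS and collect the xlink:href references that point to related resources (layers, horizons, observations), each of which can be dereferenced in turn. The endpoint and feature type name are placeholders, not the actual services from the interoperability experiment.

```python
import xml.etree.ElementTree as ET

import requests

XLINK_HREF = "{http://www.w3.org/1999/xlink}href"
ENDPOINT = "https://example.org/geoserver/ows"   # placeholder SoilML WFS
TYPENAME = "sml:SoilProfile"                     # placeholder feature type name

resp = requests.get(ENDPOINT, params={
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": TYPENAME,
    "count": "5",
})
root = ET.fromstring(resp.content)

# Collect every xlink:href in the response: these are the "next hops" the talk
# describes, almost like a REST interface where you keep following links.
links = {el.attrib[XLINK_HREF] for el in root.iter() if XLINK_HREF in el.attrib}
for href in sorted(links)[:10]:
    print(href)
    # detail = requests.get(href)   # dereference each link for more detail
```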
You see it working, simply gathering information from three organizations. Totally different organizations, totally different backgrounds, totally different systems. I was using GeoServer with PostGIS, CSIRO was using Oracle with GeoServer, Landcare was using Snowflake. And immediately, these three participants, everything together, everything working. So this was a nice achievement of the soil interoperability test of OGC. And this is what you want to prove at the end of the day: that we defined the standard, the standard more or less holds, and we immediately have a prototype. So, notes on the development at OGC. These are also personal opinions: it was really nice how it was done, because the SoilML and soil community had been dragging on for a long time. We need a standard, we need a standard. We tried this, we implemented in New Zealand and the US, we implemented the ISO that we decided on by ourselves, but it never picked up considerably. And by having the soil interoperability experiment, it's like you have a carrot, but you also have a stick. OGC defines: these are the steps, this is how you're doing it, these are the deadlines. And then you present a report at the end and you're going to be evaluated. It's a sort of professor, and it forces you to implement it, and you see how well or badly it goes. Also, the process is totally transparent. There was a wiki, we have documentation, we have the reports online, you can find information online. And unlike ISO, where everything is a bit hidden, we simply give the documentation to everyone. So if anyone comes and says, oh, I want to know about SoilML: here's the PDF, have fun. Next steps. The next step I'm taking personally is here at FOSS4G: I'm going to see a presentation on Hale Studio, because the schema mapping and the programming between GeoServer and the tables you have in PostGIS, done by hand, is tiresome and complicated. So we are looking at tools for how to do this, how to pass information from the database to the XML and to the WFS. And there's the Hale editor where you can plug in some schemas and go from there. How fast am I going? And you're not paying attention. Okay. Okay. So, schema mapping the easy way, because programming XML by hand and all these things is really painful. So next steps also: more organizations participating in this interoperability work. Well, interoperability was achieved, but we continue working a bit with more organizations. SoilML as a standard for OGC: we have a very good and well-written, detailed interoperability report, and from there we continue towards a standard. And also dissemination of know-how and of the typical bugs that are there. So in a couple of years, one, two years, I don't know when, I hope we're going to see a presentation from a team where you see, so, SoilML as a standard, and the compliance tests, because I like very much what the OGC is doing concerning test-driven development; I call it developer candy. So yeah, that's it. I'm going to have to repeat the question. So, any questions from the room? So how many people are soil scientists here? One, two, three. Actually, I studied it, I was a soil scientist. So a question there, yeah. Thank you. Thank you, Jorge, for a very interesting address. I'm wondering, what kind of tools do I need to publish SoilML data and to use or to acquire SoilML data? Because this is the next step for adoption of this standard. To do it, what sort of tools do you have?
You have a lot of tools to implement it, mainly on the GeoServer side. Sorry. With GeoServer, you have the documentation, which could be a bit better, but GeoServer has more or less all the tools that you need to implement it, and you can also use PostGIS as the reference database behind it. The client side is more complicated, because there are still no clients, since this is something that started one year ago. But immediately, if you are a developer, you have the specification of SoilML and you can build a website like they did at Federation University in Australia, where you simply have the initial structures and you follow the links, and you can make a GUI on the website. But at the end of the day, no. In the future. Probably me, probably. So you didn't try the WFS 2 plugin for QGIS? No. It didn't work, or you didn't try it? Yes, I tried. Okay, it didn't work. Yes, so I think currently there are some new developments going on in QGIS to improve the WFS 2 support. So I think soon, maybe. Because one comment, yes: WFS 2.0 is really nice, with the functionality it has, and the new implementation, like paging and so on, it supports a lot of things. Yeah. More questions? Yeah, here. Not yet. Not yet. Hi, thanks for the presentation. How does the SoilML schema relate to the INSPIRE schema? Are these close together? Since we have soil data in our own schema now, we have to harmonize it to the INSPIRE schema anyway, so I was wondering, how do these two relate? We had very, very good help from Giovanni De La Albo from Italy. He is very familiar with INSPIRE and helped us more or less to make things a bit similar, but still a bit different where it was needed. In the soil interoperability report, there's a comparison between SoilML and INSPIRE. I can help you with that. And there are some features... I'm trying to see the table itself, because all the classes, everything is compared in that table. There's an alignment; both are very similar, actually. And SoilML is GML-friendly, based on the Observations and Measurements model as well. So I don't think it's going to be much of a problem passing from one to the other if you need to transform. Okay, last question. No, nobody. Lunch. Thank you so much.
Soil data is crucial for environmental studies and analysis, but access to it and proper exchange formats and mechanisms are still poorly developed. The OGC Soil Data Interoperability Experiment (SoilIE), undertaken in the second half of 2015, had the objective of developing and testing a soil standard that harmonised existing standards defined in Europe and Oceania. During the SoilIE, participants from Europe, North America and Oceania mapped data in their soil databases to the SoilIE XML schema. Multiple OGC Web Feature Services (WFS) delivering soil observation data using the XML schema were established, along with OGC Web Processing Services to allow on-line derivation of new data. The SoilIE was successful with access to data in multiple clients from multiple soil data providers, each using different software configurations. The interoperability results will be presented along with next steps on progressing the SoilIE XML schema, RDF vocabularies, linked data and remaining major issues. Jorge Samuel Mendes de Jesus (ISRIC — World Soil Information)
10.5446/20302 (DOI)
So, a warm welcome from my side for this last talk in the session before lunch. So your main task is that you really prepare some questions and don't leave just because you're hungry. And the task of these two people, Tobias Sauerwein and Marion Baumgartner, is to tell us what they can do with MapFish Print version 3. I think most here may know MapFish Print: if you want a map from your GIS or WebGIS and you want it as a PDF, then MapFish Print is your way, and the details we get from these two persons. Thank you. You have 20 minutes. Thanks for the introduction. My name is Tobias, this is Marion at my side. We both work for a company called Camptocamp. We are a European company with offices in Switzerland, Germany and France. We work a lot with OpenLayers, Cesium and many other open source software packages that you hear about at this conference. We are hiring, so if you are interested, please check our website. I guess many of you are doing web mapping projects, and a common requirement is printing or PDF export. Luckily, you have a few options, and depending on your requirements, you can choose one of them. The easiest way is to use a print CSS stylesheet. Google Maps is doing it like this: you simply define a custom CSS stylesheet, which is used for the default browser print. Another way, that me as a JavaScript developer I'm excited about, is to generate the PDF directly in JavaScript on the client side. There is an OpenLayers 3 example: it takes the map, creates a PDF directly in JavaScript, and you can download it. It's also possible to generate the map in higher resolutions. And similar to the first option is wkhtmltopdf. It takes a web page and generates a PDF, but unlike the default browser print, you can define custom headers, custom footers, and you can also define a table of contents. We used this for a project that we developed for the World Bank. We had a designer do the design of our web page, and we wanted our reports to look the same. So we considered using MapFish Print, but that would have meant redoing the design in the templates. Instead, we chose wkhtmltopdf: you simply take the normal page, add some custom CSS for the print, and then you generate a PDF. This is also a very nice way. This runs on the server side; it's something that you call from the command line, basically. It is based on the WebKit rendering engine, which Chrome's engine was originally forked from. Otherwise, generating a PDF doesn't seem to be a difficult task. You can use one of the PDF libraries, so you can simply draw your title by hand, you can download a WMS image and insert that into your PDF. At first this seems easy, but then you want to support other geodata formats, and every time you want to change the title, move it a tiny bit to the left, you have to go into the code and you have to ask a developer. So if you want to choose one of these solutions, you have to ask yourself a few questions. One is, do you want to print your map in higher resolutions, or are you okay with the default screen resolution? Then, do you want to just support the default A4 page format, or do you want to do a larger page format like A0 or A1? And also, which geodata do you want to support? Are you okay with just downloading a WMS image, or do you want to render vector data? Then the layout question: how are you going to design your report? Do you have to go into the code to change the design, or do you maybe want to do it like in OpenOffice or in Word, where you click on a text, change the font size and the font, and move your elements around?
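As an aside on the wkhtmltopdf route described above, driving it from a script is straightforward. A minimal sketch in Python; the header/footer flags follow common wkhtmltopdf builds but should be checked against `wkhtmltopdf --help`, and the file names and URL are placeholders.

```python
import subprocess

# Render a report page to PDF with a custom header and footer.
subprocess.run(
    [
        "wkhtmltopdf",
        "--header-html", "header.html",   # assumed local template files
        "--footer-html", "footer.html",
        "--print-media-type",             # honour the page's print CSS
        "https://example.org/report",     # page to render (placeholder)
        "report.pdf",
    ],
    check=True,
)
```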
Also a question to consider is if you're generating your report on the client side, on the server side, if you're doing it client side, you don't have to do the server infrastructure, but it also means that you're putting more work on the client. If you're doing it server side, you can more easily generate large page formats, higher resolutions, but it also means that you have to maintain the server infrastructure. In the following, we are going about our solution, MapFishprint. Basically it's a Java library that you use to generate reports with maps and map-related components like a scale bar or a north arrow. Yeah, it's a Java library that you can use in your own Java application, but it's also a Java web application that has to be deployed to a Java web application server. MapFishprint 3 is built on three open source libraries. We are using geo-tools, which is also used in geo-server for generating the map graphics and also for parsing the geo-data. We are using Jasper reports. This is a templating engine, which also includes a template editor for the layout. At the end, we generate a map and then pass it to Jasper reports. Jasper reports is doing all the layouting, creating different pages if needed. For the architecture of MapFishprint, we are using the Spring framework. Everything in MapFishprint is a plugin, which makes it easy to extend. At the end, MapFishprint itself isn't doing that much. It's just providing the web API, some security stuff, and widgets like the scale bar, north arrow, or tables. Let's take a look at the print process. If you want to generate a report in MapFishprint, you need a configuration. You have to define a print app. That's how we call it. A configuration consists of a configuration file, this config.yaml file, and a report template. And the configuration file, you define what you want to show in your report, for example, a title or a map, or that you want to use a scale bar. And then in the report template, you position the elements. You say, my title is at the top. It's using this font size, this color, and you are positioning the map and all the other elements. Then to generate a report, you have to send a print request. In the print request, you provide what you want to be shown in the report. So you're saying, my title should be this. My map should contain these layers. I want these features to be shown, and I want to show this map extent. And Marianne is going to tell you more about how the configuration looks like. Okay, so I will tell you how to configure a print report. And it's basically the main configuration is done in the yaml configuration file. It consists of a title here. In the second line, it's the A4 portrait. And in the third line, you define the actual report template that you will use. So in this case, we're using the report.jrxml. Then it has two main parts, the attribute parts and the processor parts. So the attribute part consists of different attributes. In one, it's the direct attributes that we use for direct inputs in the Jasper report. And then there's some indirect input attributes like the map and the scale bar that are passed to the processor, and they are processed further. Then the processors, they actually do the work of building your report, creating the map, and for example, also creating a scale bar. So that's where the actual work is done in the report. And then the second part is to create the basis of the report or the report structure. 
And for this, as Tobias already mentioned, we use a WYSIWYG editor called Jaspersoft Studio. And this is very convenient, because you can actually drag and drop elements into the center part, where you see the main report that you're editing. So from this part here you can take text fields and add them to the report. You can also change the font size and the font type of an element. You can add images. Over here you have an overview of what has been added to the report, and there are different views. What you can see here is the design view. Then you have a source view where you can actually see the source code; the source code is based on XML, and you can edit and view that in a second. Switching is done at the bottom. And the third is a preview, so you can actually look at what you have done. Then the third part of a print report is the print request. This is a JSON file, and it contains the actual data that's included in the report. So, for example, if you define a title, then you actually define it here: up here we see that the title has been defined as a sample print, and it will be placed in the place where the title was mentioned before. It also defines which map, which projection, which DPI, where the center of the map is, and which layers are on the map. And then one last thing that's kind of important: if you have a map, it's not done yet. Usually you want to explain a bit more about the map, so you want to add a legend, a north arrow, maybe a scale bar, an overview map. And the nice thing is that it's all possible to add these kinds of widgets to a map or to the print report using the configuration files. So I will pass back to Tobias, who will explain some more. We support a number of geodata formats: for vector data GML, which comes from WFS, and GeoJSON. We support tile services like MBTiles, WMS and also tiled WMS, and WMTS and GeoTIFF. And we're using GeoTools for the conversion of the data, so it would be easy to support other geodata formats that are supported by GeoTools. To style vector layers, you can use SLD, which supports the default styling of GeoTools. We also have a custom JSON styling format, which is similar to SLD. So, I don't know if you know SLD, but you have symbolizers: you have a line symbolizer, and you say, I want this line to appear in green color and with a stroke width of two. You can also define selectors. So here we are selecting all features that match a buffer around a polygon. You can also select features on attributes, for example using the ID or some other attributes. This is similar to SLD. Tables come for free, they're supported by JasperReports, so it's very easy to create tables. Something very interesting is if you want to have multiple maps. This is done with a data source. So if you want to show multiple maps in a report and you don't know in advance, when you're writing the configuration, how many maps or how many entries you want to show, you can configure a data source in MapFish Print. And then when you send the print request, you say: I have three restaurants and I have a title for each restaurant, and for each restaurant I want to show a detail map. That's cool. Charts are also supported out of the box by JasperReports. And what I said about these multiple maps: in this case MapFish Print is creating a data source, but you can also connect to other data sources, so simply databases or files. When you do a print, MapFish Print receives the print request.
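Submitting such a print request over the MapFish Print 3 web API could look roughly like the sketch below. The URL layout (POST to /print/{app}/report.pdf, then polling a status resource and downloading the report by reference) reflects a common setup, but the exact paths, the attribute names and the layer parameters depend on the print app's config.yaml and the MapFish Print version, so treat them as assumptions; the WMS used here is just a public demo service.

```python
import time

import requests

BASE = "http://localhost:8080/print"   # assumed MapFish Print endpoint
APP = "default"                        # name of the print app

# A minimal print request; attribute names must match the app's config.yaml.
spec = {
    "layout": "A4 portrait",
    "outputFormat": "pdf",
    "attributes": {
        "title": "Sample print",
        "map": {
            "projection": "EPSG:3857",
            "center": [793172, 6573197],
            "scale": 25000,
            "dpi": 150,
            "rotation": 0,
            "layers": [{
                "type": "wms",
                "baseURL": "https://ows.terrestris.de/osm/service",
                "layers": ["OSM-WMS"],
                "imageFormat": "image/png",
            }],
        },
    },
}

# Create the report job; the response is assumed to contain a job reference.
job = requests.post(f"{BASE}/{APP}/report.pdf", json=spec).json()
ref = job["ref"]

# Poll until the report is rendered, then download it.
while not requests.get(f"{BASE}/status/{ref}.json").json().get("done"):
    time.sleep(1)

with open("report.pdf", "wb") as out:
    out.write(requests.get(f"{BASE}/report/{ref}").content)
```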
MapFish Print generates the map by requesting web services, by requesting tile services, and then it passes the information on to the JasperReports reporting engine. And from the template you can also request data: if you want to show additional data, load additional data from a database that doesn't have to be included in the print request, this is also possible, and it's very powerful. Then you somehow want to use MapFish Print, you want to include it in your existing applications. You can either integrate MapFish Print in a UI running on a web page, or you can also connect to it from the server side. We at Camptocamp are developing a library called ngeo, which combines OpenLayers 3 and Angular. And using ngeo, you can generate a UI automatically from a print configuration. ngeo parses the configuration and then generates input fields. For example, the print configuration in this example had a field title and a field comments, and then this UI was generated from that. Also, when you're doing the print, this is an OpenLayers map, and it checks which layers are shown in this map, it checks which features are shown, and then converts the OpenLayers layers and the OpenLayers features to the MapFish Print print request. GeoExt 3 also supports MapFish Print 3, if you're more into ExtJS. But the web API of MapFish Print is, I would say, easy to understand, so you can easily integrate it in your application. I mentioned before that the architecture of MapFish Print is a pluggable plug-in architecture, so everything is a plug-in. All the processors, for example the processor to create a map, all the attributes, all the layers, all the widgets are plug-ins. So if you are missing a format, you can easily create a new plug-in and run MapFish Print with this plug-in. The website of MapFish Print is linked there; I'll quickly jump there to show you. There you get a link to our documentation. In the documentation, we have an introduction workshop. If you want to learn more about MapFish Print, I recommend you follow this introduction workshop. If not, in the documentation, let's check out the CreateMap processor, which creates the map graphics. We often have links to examples. Each example has the configuration file and also the template. We also have this folder, expected output, that we are using for our automated tests; there you can see what the report is supposed to look like. We have tons of examples. To learn more about MapFish Print, a good way is to take a look at these examples to get started. So that's it. Slides are available under this URL, and you can find us on GitHub and Twitter. Thanks for your attention. So thank you very much, both of you. Now I'll see if the task that I gave you is fulfilled, so your questions, please. Okay, a long way for me to walk, but wait until you get the microphone. Are you doing anything clever with labels on raster base maps? Because 72 DPI is pretty much unacceptable for every print to paper. If you go to a higher resolution, then the labels will be too small to read. So are you doing some resizing or something clever? If you're using WMS and you have, let's say, a map of 500 by 500, and you want to print in double resolution, we are requesting a bigger image and then resizing it. Yes, and many WMS servers also support a higher DPI option, so you can say, I'm printing this map in a higher resolution, and then the labels will be scaled accordingly. Consider you have a map in the PDF and a table.
You typically want a matching between the rows in the table and the features on the map, like having a running number from 1 to X, which is also displayed on the map. Is there somehow built in support for such transient numbering? No, you would have to do it when you are generating the print request. So you would add an attribute to the first feature and then also the same number to the first row of your table. So no automatic support you would have to do it when generating? Through processors? Yeah, it's two different processors. You have the map processor, which creates the map with the features, and then you have the table processor, which generates the map. It's separate. That would be the way to go. Yeah. Okay. Do you have one example? You said you have a lot of examples in the documentation. Do you have an example there for the external database that is directly accessed from, or external data sources that is accessed from Jasper Reports? I think it's not in the examples there. No. So somebody should do it and pull the request and then it will be added. Okay. Thank you. More questions? So at least see a lot of hungry faces. That's the right point in time to tell you that lunch is prepared and if you are hurrying up, all the other talks may still be discussed. So you are the first ones to get the dishes. Thank you.
Generating reports is an important feature in many web-mapping applications. MapFish Print 3 is an interesting tool for this job. The MapFish Print project consists of a Java library and a web application for generating reports with maps from many different raster and vector sources, like WMS, WMTS, tile services, WFS or GeoJSON. The integration with the reporting engine JasperReports facilitates the creation of complex reports. A WYSIWYG report designer makes it easy to lay out report templates and to position tables, graphics, diagrams, sub-reports, maps or map components like scale-bars or legends. This talk introduces MapFish Print 3 and addresses the following topics: the architecture of MapFish Print 3; the configuration of report templates; using the report designer; examples of complex reports; JavaScript libraries that ease the integration with OpenLayers projects; upgrading from the previous version; and new features and current developments. Tobias Sauerwein (Camptocamp) Marion Baumgartner (Camptocamp)
10.5446/20301 (DOI)
So, please welcome Benjamin Pross for the first talk of this afternoon, about a geoprocessing REST API. Yeah, thank you, Jérôme. Hi, I'm Benjamin Pross. I work at 52°North in Münster, not very far from here. And I will present the geoprocessing REST API that we are currently thinking of and currently developing. So first I want to introduce the OGC Web Processing Service in a few slides, because the REST API is built upon this. Then the REST API itself. After that I will talk about a proxy that we have developed that implements this REST API. If we have time, and I think so, there will be a live demo, and then, last but not least, the outlook. Okay, so the WPS standard, the Web Processing Service, is from 2007, still a fairly young standard, I would say. It's all about web processing. It's a standardized service interface, and it's used to describe process offerings, describe the inputs and outputs of these processes, and it has an Execute operation to execute the processes. No processes themselves are specified; that is up to the vendors. And the processes cover a wide range: they can be simple calculations like an intersection or something, but also complex computational models, weather forecasts or something like that. It's also used as an interface to legacy software, like GRASS, for example. Version 2.0 has been out for more than a year now, and the REST proxy is also based on this version. So let me introduce the operations. GetCapabilities, a common operation in the OGC world, returns information about the service; it can be accessed via KVP or plain old XML. The DescribeProcess operation returns information about a specific process: you give it an identifier and you get a description of what this process needs as input and what it produces; also accessible via KVP and POX. The Execute operation is only accessible via POX in the new standard, so KVP was removed, and with it you execute the process, giving the inputs that it needs. Two new operations were introduced with WPS 2.0: GetStatus, for obtaining status updates about the process, mainly a percentage, and GetResult, with which you can get the results once the process is finished. Both are available in the KVP and POX bindings. So let me quickly describe the execution modes, because this will be important later. The normal synchronous execution goes like this: the client sends a request to the WPS, the WPS receives it and then it produces the output, it does the work. During that time, which can take a while for complex models, the client has to wait for the response. When the process is finished, the WPS sends the results back in the HTTP response. Of course, if the connection is lost in between or something happens, this specific process result is lost. So for extensive processes that take a long time, the asynchronous execution mode is more applicable. With this polling-based model, the client again sends the execute request to the WPS. The WPS receives it and immediately sends a response back to the client containing an ID, and the WPS starts working. The connection is closed the very moment the response is received. The client can do something else; the WPS is still processing. Now the status requests come into play: the client can query the status of the process from the WPS via the GetStatus request, and the WPS sends back, for example, a percentage. Now the client can do something else again.
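As an aside, the two new WPS 2.0 operations just mentioned (GetStatus and GetResult) look roughly like this in their KVP form, which the talk notes is supported alongside POX. The endpoint and job identifier are placeholders, and the exact KVP parameter name for the job identifier should be verified against the server's capabilities.

```python
import requests

WPS = "https://example.org/wps"                        # placeholder WPS 2.0 endpoint
JOB = "00000000-0000-0000-0000-000000000000"           # placeholder job id from Execute

# GetStatus: how far along is the job?
status = requests.get(WPS, params={
    "service": "WPS", "version": "2.0.0",
    "request": "GetStatus", "jobId": JOB,              # parameter name assumed
})
print(status.text)

# GetResult: fetch the outputs once the job has finished.
result = requests.get(WPS, params={
    "service": "WPS", "version": "2.0.0",
    "request": "GetResult", "jobId": JOB,
})
print(result.text)
```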
And if the process is finished and the client sends a status request, it gets the result, and the client can process the result. So much about the WPS; probably many of you know that already. Now to the exciting part, the geoprocessing REST API. Let me go directly in here. What you do is you have a base URL for the WPS, or the endpoint, and you just send a GET request. At the moment we define no pattern for this endpoint URL; it could be anything. And if you send a GET request to this URL, you get the capabilities. At the moment this includes a list of processes, with URLs also attached; I'll come to that in a minute. That is what it looks like at the moment: it's just translated from the XML, not so nice, and this is subject to change, it's still under development. So this is the first entry point to the REST interface. You can attach /processes to the base URL, and what you get there is just a list of short process descriptions. It's more or less taken from the longer GetCapabilities JSON response we saw before. What's new, what's not in the WPS standard, is that you get a URL for the process, so to say, which the client can then directly access again. And if you send a GET request to this URL, it gives you the process description, again in JSON at the moment. So now probably the most exciting operation: execute. We have the same URL, base URL /processes/ process ID, that we had for the process description. Now you don't send a GET request, but a POST request to it, and the POST request has a body, the execute request in JSON. You can attach an optional URL parameter to this URL: sync-execute, true or false. The default execution mode for the REST binding will be asynchronous, because we thought it's more REST-like. And the execute request is, again, at the moment more or less just the XML execute request translated to JSON. You can either put the inputs directly in there, like here, well-known text in this example, or, probably more REST-like, you can have a reference to some external resource in the input field. So what happens if you send an asynchronous, default execute request? Immediately a new process instance will be created. That can be seen as a resource: a new resource is created that is accessible via an ID. You get an empty response back; more or less all the information is in the headers. You get a 201 status, so that's the Created status, and you get a Location header back that contains the link to the created execution instance. So that's the next lead you can follow in this whole REST API, or chain. And for synchronous execution, the non-default case, you will have to wait, like I showed before, until the process execution is finished, and then you directly get back the JSON result document containing the outputs, or an exception report. Status info, to get status information, is our next operation. Yeah, we have this ID, this URL in the Location header, I just mentioned that. And if you send a GET request to that while the process is running, you'll get back a status info document in JSON. Of course, this is for async only; for sync you don't have this URL, because you don't have a process ID, it's finished directly. This is the status info document, also translated from the XML, nice and short. This example shows an already finished process execution, where you have an additional output parameter with a URL in the JSON pointing to the process outputs. So that's the next link here.
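A sketch of submitting an execute request to this REST binding as just described: POST a JSON execute document to the process URL and read the Location header of the 201 response. The talk notes that the JSON layout is still subject to change, so the body below is schematic; the base URL and the process identifier are placeholders.

```python
import requests

BASE = "https://example.org/rest/wps"     # placeholder REST endpoint (the proxy)
PROCESS = "org.example.SimpleBuffer"      # hypothetical process identifier

# Schematic execute document: one input by reference, one literal, one output.
execute = {
    "inputs": [
        {"id": "data", "href": "https://example.org/features.json"},   # by reference
        {"id": "width", "value": "0.05"},                               # inline literal
    ],
    "outputs": [{"id": "result", "format": "application/geo+json"}],
}

# Asynchronous execution is the default for the REST binding.
resp = requests.post(f"{BASE}/processes/{PROCESS}", json=execute)
print(resp.status_code)                 # expected: 201 Created
job_url = resp.headers["Location"]      # URL of the new execution instance
print(job_url)
```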
To get the result, to get the output, you just access this URL via a GET request again, and then you get back a JSON result document. This is how it could look; again, in this case the output is inline in the document. This is okay, I think, for things like well-known text or GeoJSON; of course, I don't recommend putting XML in there. For that, you could request the output as a reference in the execute JSON request. Then there's a new operation I didn't mention before, introduced in an extension to the WPS 2.0 specification. It's called the Dismiss operation. With that, you can stop running processes or remove finished processes, so the outputs and any artifact that the process created. This is done via the same URL we saw before, only that now you send an HTTP DELETE to this URL, and then the process is stopped or dismissed, respectively. You get back a status 200. Now, if you do that again, that's a bit tricky. We decided that if you send another DELETE to this URL, where probably what should happen is that you get another 200, in this case we say, okay, that was done before, and we send back a 404 client error. All right. We have some ideas to extend the API. This is based, as I said, on the current WPS 2.0 standard. Things that are not specified there include, for example, a list of running jobs, so an operation to list all the available IDs of these jobs. I think that was in the URL, actually; maybe you saw that. But this would have to be supported in a vendor-specific way, because it's not in the WPS 2.0 specification. And you could think of getting these existing jobs either per process or for the whole WPS or REST endpoint. Another nice idea would be to get a single output by its ID. At the moment you get all process outputs, if there are more than one; it's not defined how to get a single output. It would be nice to have that in the REST API, and maybe also in the WPS. And if you have that, you can also think about requesting different formats for these outputs, which is also, I think, a REST feature. You could attach a URL parameter after this URL, format=json, and then you get back that output in JSON, or format=xml, and then you get some GML, the same output. This is not specified in the standard, so we didn't do it at the moment, but it would be something nice to have in the future, I think. Okay. The RESTful WPS proxy, one slide about that. That's an open source project that we created, and it really is a proxy for any WPS 2.0 server that offers this geoprocessing REST API I was just introducing. It's based on Spring and, I think, quite easy to maintain. It's a GitHub project, so it's all open source. Check it out, try it with your own WPS, maybe, and file some feature requests or issues if there should be any. Okay. So I think it's already time for the demo. I hope everything works; let me just quickly change to Firefox. So there's an instance running on our demo server. I can access that. Okay, it works. So this is really just the base URL of such a geoprocessing REST API instance. This is boring JSON, but here's the list of processes, and I can really just click here and go to the next step. Okay, but the more important things are done using POST, and for that there's a tool called Postman. Okay, so from the start, this is an execute document; you already saw a snippet of that before. This is a very simple algorithm, just doing a convex hull.
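Continuing the sketch from above: polling the job URL taken from the Location header, fetching the outputs once the job has finished, and dismissing it with HTTP DELETE might look like this. The field names in the status document ("status", "output") and the status values are illustrative, following the talk's description rather than a fixed schema; the job URL is a placeholder.

```python
import time

import requests

# Returned in the Location header of the execute POST; shape of the URL is assumed.
job_url = "https://example.org/rest/wps/processes/org.example.SimpleBuffer/jobs/1234"

# Poll the status resource until the job reports success (value strings assumed).
while True:
    info = requests.get(job_url).json()
    if info.get("status") == "Succeeded":
        break
    if info.get("status") == "Failed":
        raise RuntimeError(info)
    time.sleep(2)

# The finished status document carries a link to the process outputs.
outputs = requests.get(info["output"]).json()
print(outputs)

# Dismiss: delete the job and its artifacts; a second DELETE yields 404.
print(requests.delete(job_url).status_code)   # 200
print(requests.delete(job_url).status_code)   # 404
```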
Yeah, forgive me for not doing something more complex, but the demos, I'm hoping this will work actually. Yeah, specify the input inline. It could also be a reference. And here we define at the moment the output format. And whether we want to have the output as value, also inline or as reference, then we would get a reference to the output. So now if I send this, I hope it works, this is the asynchronous case. So I don't have this URL powering at the top. And then I more or less immediately get back in result and response. And here we have this header. I don't see the 201, but it should be there. More important is also the location header. If you follow that, then we can go to the status. OK, now it's finished. I was slow. If it would be running, you would have a status running. And the output would be a percentage. But now it's finished. And we get back the reference to the output. We only requested one. We can just simply follow that here, send a gut request, and we get back the result. So I have a couple of other requests prepared here. We can do a sync execute of the very same process. OK, so maybe you didn't see that, but it's already finished and has sent back the result. This is the very same result document we saw with the Erzink. That's another job ID. Now we can do that again with the reference. How do I have that? It doesn't matter. I did it already. So actually, this was sync. And the outputs should have been transferred by reference. So actually, I was wrong. It's not equal to the first one. The first one had directly the output polygon. And here we have a reference. If you send another gut request to that, we directly get only the convex hull of the input. OK, so I think that's enough for the demo. Yeah, a few words about use case. So part of this work we have done here was funded by the WaterNU project, especially the RivaVesen Interpability Pilot or Rebase. And here we have the architecture. So we want to have predictions of flooding areas. And the inputs are some gorge data from water gorges like you have here on the RivaVein and the Digital Elevation Model. And that goes into AWS, which encapsulates this flood model and then sends back alerts and the affected areas. And for this, we're planning to use also the REST API. And yeah, I think this is a quite nice use case. So conclusion, yeah, I introduced the REST API to you just a few months ago. A word to the Richardson maturity model. You may have heard of that. And we think this REST API is level 2.5. I mean, we have lots of things there. We have been using HTTP words. We are using references. Yeah, but yeah, it's a bit of a question whether this whole processing and executing really fits so well to the REST paradigm. So I would say it's level 2.5 in this model. And we can discuss that maybe later. The JSON, I was mentioning that is subject to change. It's not very nice right now at the moment. Maybe we have something better in the future. There's some efforts ongoing at the moment at the Open Geospatial Consortium about this REST and JSON things. And there will be a public engineering report at the end of this year. Summarizing different approaches. So there are already some other REST approaches in the OJC world. For example, for WMTS. Now we have this for WPS. And in the ER, all these efforts will be summarized and they were given some recommendations. Okay, and we hope that this REST API will also go through the OJC summarization process. Thank you very much. So any question? Very interesting presentation. Thank you. 
My question is can I safely assume that by using this REST API endpoint, my original endpoint because the WPS server exposes some HTTP endpoint, it's not hidden. Because the thing is I have a WPS server and we have clients which are already using it as an HTTP endpoint. Just by introducing this REST API, I hope that the existing endpoint HTTP, it's not REST, but it's not hidden or somehow it's still accessible, right? Yeah, you can do it like you want. You can do it with a hidden service or with a publicly available. So it doesn't interfere with the existing endpoint? Just sits on top of that. Completely loosely coupled. Okay, and you don't expect this REST API, whatever this library to work with 52 North server, but any WPS server should be fine. I think so, yeah. I didn't actually try, I have to admit. Because if you use a different WPS server and want to add this capability, I hope it's... Yeah, and that's why we did the proxy. I mean, that's really just simple. Just point it to another WPS and it should work. So, thank you. It's a valid WPS, so to say. Any other question? No. So, I may have one. I noticed that on your slide, some version in JSON was 1.0.0. So, I was wondering if it is not possible also to use it with 1.0.0 version. I mean, to put your proxy in front of the WPS 1.0, maybe with some limitation? Yeah, it's not possible at the moment. So, it only works with WPS 2.0. Because of the lack of the get status and get result request? For example, I mean, it would not be hard to make it also work with WPS 1.0 button. As the new version is out, we think we support that one. So, thank you. Now you can move to another session if you need.
We are seeing an increasing demand for a standardized REST binding for web-based geoprocessing. In this talk, we will present the ongoing discussions and developments that will lead to a RESTful binding for WPS 2.0. In the ongoing OGC Testbed-12, REST bindings for different OGC Web Services, among them WPS, will be developed and described in Public Engineering Reports. 52°North is leading the developments regarding WPS. However, this effort will need the support of the interested communities inside and outside the OGC. We want to use this talk to inform the audience about our concepts for a RESTful Geoprocessing API and we are eager to getting input for the way to go. Benjamin Pross (52°North GmbH)
10.5446/20299 (DOI)
Okay, everyone, thanks for coming. This is the four o'clock block. It looks like we have three talks on semantic applications. And first up is Francesco Bartoli from Geobeyond in Rome. He and I work together in the GeoNode community, and he's going to be giving a talk on his project in the Semantic Web. Thanks, Jeff. So today, thanks for coming, today I will speak about a web API built on top of GeoNode. My company is Geobeyond. We have expertise in geospatial solutions and identity and access management systems. We are a partner of Boundless as a solution provider for the OpenGeo Suite. And we are also a founder of Rios, which is an Italian professional open source network. Currently we use GeoNode as a complete geospatial solution for building spatial data infrastructures. This is the basic overview of the user interface of GeoNode. We are currently approaching the release of 2.5, and at the end of the year 2.6 will be released. We use GeoNode as a spatial data infrastructure for snow avalanche information: we have run geoavalanche.org, a geoportal for such information, since three years ago. So what is GeoNode, essentially? It's an open source geospatial content management system. It has a lot of different frameworks that collaborate. Essentially, at its core there is Django, an MVC web framework. GeoNode uses GeoServer as the geospatial middleware for the OGC web services, and PostGIS for storing the geospatial information. GeoNode also has catalogue management that exploits the features of pycsw or, optionally, GeoNetwork. It has a web GIS component in the front end, essentially where you can put layers in a map and edit and save maps, and this web GIS component can be GeoExplorer or, as an alternative, MapLoom. And essentially, as the web mapping framework it uses OpenLayers, or optionally you can also plug Leaflet into the Django application. And as contrib apps, you can plug GeoGig into GeoNode, which is spatial data version management, or a full text search like Elasticsearch. So you have a lot of possibilities to build your spatial data infrastructure with a lot of features. This is a quick overview of the architecture and the components that I mentioned that cooperate together. The main components are Django in the front end, running behind the web server, which can be Nginx or Apache as well, and GeoServer as the mapping engine. So what can you do with GeoNode? You can share geodata on the web, you can create and style maps, create and edit geographic features, you can manage and publish metadata, and you can also decide what your users can see and what your users can edit in the content management system with an access management system. You have a catalogue service where you can put all your metadata, and you can also share maps and layers on the web with social sharing functionality. Okay, this is the overview of the user interface for layers. You can see in the main block GeoExplorer, where you can load layers, save maps, and also download layers, edit layers, and create new maps from existing layers. So layers can be visualized automatically with a bounding box and can be edited, changing the data, the style, the metadata, and the permissions, so basically which users are allowed to do something with the layer. A layer can be downloaded in several formats, basically those supported by GeoServer, for example GML, GeoJSON, CSV, etc. And even metadata can be downloaded in the formats supported by pycsw.
This is the user interface for maps; essentially you get GeoExplorer as the WebGIS component. Here, after you load layers, you can edit the layers, edit the data inside the layers, and save new maps. So basically a map is an instance of the viewer/composer of GeoExplorer. You can also manage the order of the layers and the styles, which are saved with the map, and filter and edit them if you load the map again at a later moment. And you can also define the right privileges for your maps. So what is GeoLinkedData? Basically it's a Django application developed on top of GeoNode as a custom GeoNode project. The main feature is that you are able to publish interlinked geodata. Currently this feature is limited to shapefiles, but it can be extended. Such geodata can be converted into a triple store and loaded into a Virtuoso backend with a SPARQL endpoint. The web API is based on Django REST framework, which is an extension of Django for publishing REST APIs. It has a web-browsable API, so you can essentially look at all your API in a web interface and give it a try from the user interface. From a security perspective, it supports several different protocols like OAuth 2, JSON Web Token, etc. The formats supported for the triple store are RDF/XML, N-Triples and Turtle, and basically these formats are derived from a tool called TripleGeo, which I will show you in a while. So what can you do with GeoLinkedData? You can create a new project from a stable version of GeoNode and install the GeoLinkedData API as a dependency. Basically you run django-admin startproject with the name of the project and the GeoNode project template, and simply pip install the GeoLinkedData API. And then you add all the dependency apps to the installed apps in the settings. In GeoLinkedData we have five endpoints: essentially a data endpoint where you can browse all the shapefiles that you have converted and loaded, the endpoint for converting a shapefile to triples, the endpoint for loading the triples, plus an endpoint to manage user authentication. This is the shapefile-to-triples endpoint. As I mentioned before, you can simply try with your own shapefile how the geospatial data can be converted into a semantic format like RDF or N3, et cetera. Or you can just run a curl call from the command line, passing some simple parameters that refer to your shapefile and the endpoint of the API. This is the endpoint for loading the data that you have converted into the semantic data store. So you have to decide some parameters, which can be inserted in the web interface, or you can as well pass such parameters on the command line with curl. In the backend there is an orchestrator that essentially manages the communication with the backend servers. This is a Node.js application developed with the Express framework, and the API can call this orchestrator to load geodata and metadata into GeoNode, to transform geodata into triples with the TripleGeo tool that I mentioned before, and finally to store the triples into the Virtuoso semantic database. This is the logical architecture, so we have essentially different open geospatial software packages that collaborate together to get the job done. So we have GeoNode, as mentioned, the Node.js application, the TripleGeo service for actually converting the data, and the Django REST framework to expose the web API to the public.
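To make the "curl from the command line" idea concrete, here is a rough sketch of the same kind of call done from Python with requests. The endpoint path, field names and host are hypothetical placeholders (the talk does not spell them out), so check them against the GeoLinkedData repositories before using anything like this.

# Hypothetical sketch of calling a shapefile-to-triples conversion endpoint.
# Endpoint path, parameter names and host below are assumptions, not the real API.
import requests

API_BASE = "https://example.org/api"          # assumed GeoLinkedData deployment
AUTH_TOKEN = "my-token"                       # e.g. a JWT or OAuth2 bearer token

with open("parcels.zip", "rb") as shp_archive:   # zipped shapefile to convert
    resp = requests.post(
        f"{API_BASE}/shp-to-triples/",           # hypothetical conversion endpoint
        headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
        files={"file": shp_archive},
        data={"format": "turtle"},               # assumed output-format parameter
        timeout=300,
    )

resp.raise_for_status()
with open("parcels.ttl", "wb") as out:
    out.write(resp.content)                      # the converted triples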
And in the backend there is also the Virtuoso server, which is able to receive the semantic formats. Once you have loaded some data into the semantic backend, you can also search that data with, for example, a SPARQL query. And finally I would like to give you some links to the repositories for the project. We have different repositories: the GeoLinkedData REST API and the application built on top of GeoNode. I also put in the link for the TripleGeo web service that actually does the job of converting the shapefile to the RDF format, and the Java library behind that web service. And finally the Node.js orchestrator that orchestrates all the calls between these servers. So that's all, if you have a question. I don't know, is Sean Gillies in the room, or is he here at the conference? You know this library Fiona, which is kind of a Python binding on top of OGR. It also has the capability to transform into triples, so that might also be an option in your architecture. No, I didn't know that. It's good to have a look. Works great. Okay, cool. Any other questions? Okay. Thank you, Francesco. Thank you.
Publishing open data is a trending movement, but geographical information is still often released as shapefiles, even though this common format isn't recommended for that purpose. We have used GeoNode, a spatial data infrastructure for publishing open geodata through standard OGC Web Services, with a RESTful API to model such resources for semantic interoperability. GeoLinkedData is a Django application based on a GeoNode template that allows you to publish interlinked shapefiles as triple stores and search them with GeoSPARQL queries from a Virtuoso backend. Francesco Bartoli (Geobeyond Srl)
10.5446/20296 (DOI)
So, let's start with the next presentation. Next we have Johan Van de Wauw, who — I haven't actually met him before, although he's quite close to the Netherlands — but I saw a lot of his emails on the mailing list, and he's a big organizer of open source conferences in Belgium. I was actually not familiar with your work itself, so I'm quite curious. Good luck. So, thank you for the introduction. I'm going to talk about SAGA, and the title of my presentation is Automating Your Analysis with SAGA GIS. The first thing I want to know is: who of you has used SAGA? Okay, so that's like one third or something. So I will focus a bit more on what SAGA is and what it does, and then in the end I will go a bit faster on how to automate things, because there are actually a lot of slides at the end — it's also a workshop which we can do. I'll put the presentation online and you can look at it more slowly when you want. Anyway, SAGA. Well, SAGA is a GIS system. Yes, good. It's no longer a hip term, I heard, but anyway. It's written in C++. It has an API which you can use. And actually, if I would describe SAGA, I would not say that it's a program like a desktop GIS program, but mostly a toolbox. It's a toolbox which has tools to do — well, let's say originally it started with terrain analysis, so making hillshades, making watersheds and those kinds of analyses. But slowly a lot of other GIS tools got added, and I think nowadays we have tools for vector editing and for analysis of a lot of different things. You can run them through the graphical user interface which you see at the top, you can use them from the command line, and you can use them in a number of scripts. There are also a number of other programs I will talk about that use SAGA. I think the most important thing on this slide is: it runs on Windows, it runs on Linux, and it's very easy to install, so you don't break your system. Here it even says you can use it from USB sticks, and actually I always carry a SAGA with me. If I'm with a customer or a client, I just plug it in — usually they use Windows — and I can immediately use it. I don't have to install anything. So even for that I think it's a very useful tool to know, because if you need a little GIS application to change small things, SAGA could be a good choice. It has been under development for about 10 years with a relatively small team, and we have about, I think, 7,000 downloads per month. I said a small team: actually there are two main companies or two main groups which contribute to the development. It's the University of Hamburg and it's Laserdata in Innsbruck, which is a company doing a lot of analysis using LiDAR and those kinds of things. They have built some proprietary SAGA modules which they use in their solutions for their customers, but they also heavily contribute to SAGA development itself. You don't see my name because I'm from Belgium — I'm just one of the people from the community who contributes. I did contribute quite a lot in the past, recently I've also been developing some modules, and I think I'm also doing this outreach, which I think is very important: making sure the connection with other projects is good. Anyway, yeah, I think I said it before. So, toolsets: we have about 670 tools, I just looked, and that's quite interesting.
You see the number of tools was always growing, and I didn't feel like remaking the graph completely, but now it's going down a little bit. The reason for that is one of the things which I'll discuss at the end: we've now made it much easier to combine a few different modules and make a workflow yourself. Previously we would make a new tool which was doing nothing else than calling a few other modules. We've removed those and replaced them by scripts; that's why the number is now going down a bit. If you open the graphical user interface of SAGA — maybe I should just show it in real life — what you usually see are a few different blocks. If you open a data set, it will not show up in the view; you have to find it here under data, and if you double click it will open. And then the most important thing, which you cannot see now, are actually the tools themselves. They sit here. Why are there so few? Okay, it's a developer machine. I'll show it here. So if you go to tools, I'll just load all of them again. Maybe they are loaded, but if you are doing development you sometimes remove them. SAGA, where are you? Ah, of course, it's this one. Okay. You see now I have a number of libraries here, and here I have a lot of modules. Because I switched the version just before the presentation, it didn't recognize it — that's my error. So those are the tools; that's what SAGA is really about. There are a number of things you can do in the interface. You can get table views, you can do scatter plots, histograms, map views. You can make a print layout, although I would not recommend it; in most cases I would say do your analysis here and then export it to QGIS and make a beautiful map there. SAGA is not really meant for this — it can be an option if you just want to show something quickly, but far from that. So more important here is the tool library. If you click a tool, you will get a window here where you can set a few options. For example, if I go back here, I have a small digital elevation model of Mount St. Helens. I can go to a tool here, under Terrain Analysis – Lighting, and I take a hillshading. If I open that tool, here I get the options. So what are the options? I need to set which grid I am going to use as the elevation model, and then I will execute it. It's finished now. I don't see anything changing, but if you go to your data, you see that you have another data set, which we can show on top. If you want, you can make it a bit more transparent, for example, and then you get, if you like this, something which is a bit more beautiful. This is just a very simple analysis, but if you look, there are much more complicated models. We also have a command line, and the command line does exactly the same as these modules. So before, I first selected the module, then I set some properties, and then I ran the analysis. If you run the SAGA command line without any option, you get all the tool libraries with their names. Then you can take one of these libraries, for example shapes_polygons, and you will get a list of tools which are in the library, and then you can run one of them. Or if you just choose one, it will tell you what the options are, and then finally you can run it, and you will see — blah, blah, blah — the output shapes. It's okay; this goes very fast if you do it like this. The other interfaces basically allow the same thing: they can be accessed from R and from Python. Another interesting one is QGIS Processing. Let's see.
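Before moving on to QGIS Processing: as an illustration of the command-line workflow just described, here is a minimal sketch of scripting saga_cmd from Python. The library and tool used here (ta_lighting, tool 0, Analytical Hillshading) and the parameter names follow common SAGA usage, but treat the exact names and flags as assumptions to verify with saga_cmd on the SAGA version you have installed.

# Minimal sketch: run a SAGA tool (hillshading) from Python via saga_cmd.
# Library/tool ids and parameter names are assumptions -- check them with
# `saga_cmd` / `saga_cmd ta_lighting` for your installed SAGA version.
import subprocess

dem = "dem.sgrd"          # input elevation grid (SAGA grid format)
shade = "hillshade.sgrd"  # output grid

cmd = [
    "saga_cmd",
    "ta_lighting", "0",            # assumed: Terrain Analysis - Lighting, tool 0 = Analytical Hillshading
    f"-ELEVATION={dem}",
    f"-SHADE={shade}",
]

# Raises CalledProcessError if saga_cmd reports a failure.
subprocess.run(cmd, check=True)
print(f"Wrote {shade}")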
There are some people from QGIS here. If you look at QGIS, you will also find a lot of these SAGA modules. What's quite interesting — and it's actually the same story as what I'm going to tell later — is that you can connect them, so you can actually make a whole flow with different steps run after each other. You can even add modules which are not from SAGA but from OTB or something else which is supported by the Processing framework. There are some disadvantages, at least for now; we may fix them in the future. One is that, for example, if you use shapefiles, if you use vector data, it will always save to a shapefile and then read it again for every step in the process, whereas if you stay in SAGA, the file will stay in memory, so it will go faster. The ZOO-Project — I just saw this today — they have made an interface to SAGA which you can run as a WPS, a web processing service. Very similar here: you see the name of the module they show. It's, again, analytical hillshading; it seems everyone likes to use that as an example. You can set some properties and you can execute the analysis. Here you are: they use SAGA in the background on the server. We have some new things in the GUI. One thing which is very interesting, I think, if you are doing real work — this has already been there for, I think, two years in the meantime, but many people still don't know it — is that we have direct support for connecting to a Postgres database and using it, so you can make queries and use the results immediately in SAGA. These are some terrain analysis tools which are, yes, for some of you maybe familiar; I think for most of you, including myself, I don't really know what they do. Same thing here, tools for remote sensing. Also classification: here they are making an automatic classification based on aerial imagery. There has been development to work better with netCDF data, because there have been some climate-related projects recently. Then we have a number of things — well, actually, we have a new 3D viewer, which can also be used as a globe, and you can also add 3D shapes to it. Before, these were a bit spread around, but now you can use them together; that's a big difference. Before you had different modules, and now they are together — the screenshots are a bit outdated. We are also working on — but this is not yet finished — making 3D rasters better supported, for example for geology data. The major thing there is that we have to change the way we handle our raster data, because we're used to taking everything into memory, and that usually becomes very big, so there are some challenges. I don't know how much time I have left. What? Eight minutes. Eight minutes, okay.
You just run all these steps like you would normally do, and if you then look — if you click on your data set and go to a tab which is called history — you can see exactly what you did before. This has been there forever. It's very interesting, because if you don't really remember which parameters or which modules you used to get your result, here you can always see it. It's saved along with the file, so if you want to do reproducible research, at least you know what was done. So we were thinking: okay, how can we implement a model builder? And a first idea is: hey, why don't we use this? Why don't we just say that, instead of building everything up front, we just try things out, and if you're happy with the result, we right click and say: okay, make this a new module. So that's kind of what we did. You see we have the different names of the tools which were run. I have this as a TWI, so it's to calculate how wet the terrain is expected to be. So we calculate the slope, we calculate the specific catchment area, and for that we use different input files. And actually, yes, if you right click on your history, you can say save as a tool chain. If you do that, you give it a name, it's an XML file, and you can immediately run it as a new module without having to program anything. Is it perfect as it is now? Well, it gives you all the modules, but you have to fill in which were the input files. And actually the next step that we are doing now, which I'll show in the presentation — right now you have to fill in, well, I don't know, four times the elevation, because we actually used four different data sets. But in reality, if you run that module, you will always use the same elevation. So we will change it — that's what's described in the next steps — to work with the same data set. And that's the part where I say we're still working on it. It's quite easy for us to do; you have to change some things in the XML, but that's not so user friendly. The nice thing anyway is that once you have your module ready, it will just show up. Wait, where is it? Tools, tools. Oh yeah, you see, here are your tool chains. For example, here I take such a project. Once you have done it, you can use them like any other module, without really having to program. So that's the major concept. I'm not going to go really into detail — this is more a description of how you can do it, but I think it's a bit too specialized for an introduction like this. Yes. So the topographic wetness index from a digital elevation model is now here as a tool chain; it used to be a separate module. We have removed the separate module because we could make it a script, which is more interesting because people can edit it — it's actually just an XML file with the different steps. This is another example where people have been doing object-based image analysis, and they published a paper, but they also published the script. So it's a kind of reproducible research in that way. So that's this paper. Oh, I'm fast. Yeah, I'll show a bit more from SAGA itself later. So wait, I'll first do that. So I told you we have a 3D viewer. You see? Nothing very fancy, but it really helps if you are doing things with terrain, to just put it in perspective. You get the same map that you had before in 3D and you can look around.
You can also open these 3D shapefiles and put them on top. So that's a feature of the GUI which is actually quite nice. Apart from that, maybe some news from the project — that's why I was going a bit fast over this whole thing. Where's this presentation? Yes. So actually, this has also been done in SAGA, because it has good tools for point clouds. They have made a point cloud of a skull which was found at an archaeological site, so it was very useful for that as well. But the final thing which I want to tell you: our website is saga-gis.org, and you can now clone the code, because recently we changed to Git. I also have a mirror of the official repository — which is still at SourceForge — on GitHub, if you prefer using GitHub, because we definitely accept new contributions. Some people have made modules but have never published them to the project for some reason; I'm trying to collect them now and put them in the project itself. Also the documentation of the modules, for example, is there and is quite easy to update. That's something typical: if you want to use a module and you try it and you find that the documentation is not enough, you can always improve it. I think that's it. Yeah. Oh, can I just say one more thing? Yes, of course. Otherwise I'll probably get the question. We just did a release. Oh, the slide is wrong. Yeah, okay. This is FOSS4G 2016, it's not FOSDEM. So we did a 2.3.1 release recently, and actually we're trying to keep this 2.3 series a bit more stable than it was before, especially for the people from QGIS, so that things don't change too much for them. Yeah. Questions from the room. I have two questions about the QGIS Processing chain you mentioned, where you can couple different modules, for example from SAGA but also from other packages. Do you also receive some sort of script there when you couple different modules from different providers? Do you receive a script? Yeah, like a Python script that you can then... I think you do. I think you can do it, but I don't really have too much experience with it. But I believe I've seen things like that, that you can right click and export as a script — or maybe it was a goal and it's not yet implemented, that's also possible. Somebody else knows the answer? I'm not taking care of the modeler part, and I don't know about Python scripts. Okay. I don't have much experience. I'll check. And the second one... What I think you definitely can do, if you have such a model file, is run it through Python or something. But actually I wouldn't recommend it too much, because you always save as a shapefile, reload, save it again as a file. If you work in SAGA, if you make a toolchain or a script with all the different steps, you can do it in such a way that the data remains in memory, so it will be much faster. Yeah, but if you use modules that are not all from SAGA... Yes, then it's not possible. And the second thing: could you put up the slide with the paper again, from the object-based classification? Sure. I'm not involved at all, so don't ask me anything about it. No, no questions, I will just write it down. Okay, thanks. Is it this one, no? Yeah, I think it is. Yeah? Okay, I can put up the slide, but there's no reference here, I think. I can look up the reference if you want. And what was the other reference that you showed? Okay. Yeah, okay. Yeah, maybe I will ask for the reference. I'll put the slides online. Okay. The FOSDEM slides are already online; they are slightly different.
No, they are quite different. More questions? Yes? Johan, do you have any plans to build or develop a programming bridge, just like the Python bridge that you have for GRASS? It would be quite useful, I should say. Yeah, so we have something which is auto-generated, which is not very usable, I think. There's a Python interface which is generated with SWIG, which is usable, but it's similar to writing — you can compare it, for those who know Python, to using the OGR module, Python OGR: you have to write it like you write C, but you do it in Python, more or less; you have to open a data set and things like that. So it's a bit comparable there. It would be nicer to have an easy-to-use interface, like Fiona instead of OGR. I understand your question like that — is that correct? Yeah. There are currently no plans. And to be honest, I think the power is in the modules which are finished, so I think we should just have an easy way to access those modules. I don't think it should be that hard. I don't think we should open the rest of the API to Python; I don't think there's that much use for it. I think it would be useful to perhaps have a way to interact with these models. Something to think about. Now that you ask, yeah, it probably could be done quite easily. Okay, I expect some bugs tonight. Or a new feature. Yeah. You understand this from the perspective — yeah? — try to understand this from the perspective of someone who is working with a tool like PyWPS. That's the use case. I saw it was you who was asking. I thought so. Yeah. Actually, I'm thinking about having an interface which is closely related to the SAGA command-line interface, but from Python, where you can access those modules, get information from the modules and run them without having to think about opening a data set. I think it shouldn't be too hard. Yeah. So maybe I should stay another day. And you too then. Join them. More questions? Okay. Thank you for being here. Thank you.
SAGA (System for Automated Geoscientific Analyses) is an open source geographic information system (GIS) used for editing and analysing spatial data. It includes a large number of modules for the analysis of vector (point, line and polygon), table, grid and image data. Among others the package includes modules for geostatistics, image classification, projections, simulation of dynamic processes (hydrology, landscape development) and terrain analysis. The functionality can be accessed through a GUI, the command line or by using the C++ API. SAGA has been in development since 2001, and the centre of SAGA development is located in the Institute of Geography at the University of Hamburg, with contributions from the growing world wide community. This presentation will show some of the newer modules of SAGA and how these can be combined to scripts and toolchains to reproduce different steps of an analysis.
10.5446/20294 (DOI)
Alright, so I'll start with a little bit more detailed introduction about myself. On the internet, I'm usually known as the GeoMoose guy — GeoMoose is an OSGeo web mapping platform. I work with another guy from Minnesota called David Bitner, and together we run dbSpatial. I also have a habit of doing reckless things with automobiles. The first example is when I crashed my first rally car upside down in the woods. You can't really tell, because this was about one in the morning. I am from the great state of Minnesota. If you've been to the US and you've never been there, it's because it's the part of the country that's really, really cold and barren, and I don't blame you. But it's where I'm from and how I party. As a matter of fact, I'm getting to GIS in a minute, but first I need to tell you more about other things I really enjoy. Rally — it's a little more popular here in Europe than it is in the US; the last time I gave this presentation, I really needed to explain it. There are two flavors. The first kind is the kind where you take a car, like the one I'm driving over here, and you try to not hit trees. You drive through the woods very, very quickly. On that particular day, my car got up to 100 miles an hour on gravel roads between trees. The Audi Quattro is a really popular example of that kind of car. But there's another kind of rally called a tour rally, or a time-speed-distance rally. The idea is you want to arrive at a very specific location, or a very specific set of locations, at a very, very specific time. This is actually how rally competitions started. You travel on average about 150 miles and you want to check in at up to a dozen different checkpoints exactly on the second, or half second, depending on how it's being scored. Speeds average about 45 miles an hour, so you're doing roughly 70 to 80 kilometers an hour and not missing a single second when you get to different places. If that doesn't sound too tricky for you, you're welcome to come out and try it; there are local clubs all over the world for this kind of stuff. All right, GIS. That's what you've all been promised, right? GIS work. It's not going to be me standing up here talking about cars — though when I explained to my wife that I'd be talking about rally today, and then tomorrow or on Friday I get to talk about motorsport, she said: what kind of conference are you going to? No, it is about GIS, and here's where we get to it. Planning a rally can be very, very difficult. Finding interesting roads to drive on, making sure they're all interconnected, and keeping everyone on time is actually a really tricky task just to set up. So I thought about it and figured: well, that sounds like a problem a computer can solve. I've got lots of data, I've got curves, so it's all basically routing. But what do we do right now for routing? What is the classic routing use case? It is to take the least amount of time, or to get to someplace the fastest. So you're minimizing distance and maximizing velocity. And really, that's kind of boring. If you've ever been to the US, you might be somewhat familiar with our interstate system, and it is composed of these kinds of lines. It is a giant network where you set the cruise control at roughly 100 kilometers an hour and you hope that there's a decent radio station within the next 100 miles. That gets kind of boring, but it's also the fastest way to get from place to place. So whenever you enter directions into Google Maps, that's where it's going to take you.
So the next question was: what makes a road fun? How do we classify something as fun? Before we even get to the computer science part of it, how do you actually have fun in a car? The ways you don't have fun: if it's really, really long, if there's lots of traffic, or if there are people complaining when you drive past — if you go down a row of homes, someone's likely to call the Polizei. But if you're having fun, it's twisty, it's turny, you've got some speed but you don't feel completely in danger, and there's gravel. Now I know I'm completely subjective, but I put a disclaimer in the slide so that you know that the gravel is disclaimed by the presenter. Which led us to math. This is actually something I got to learn about while trying to solve this problem, and it's called tortuosity. It is a way of relatively measuring how curvy a line is, and it's basically the measured distance along the line — how long it is if you were to calculate it out — compared to the straight-line distance between the starting and ending points; for those of us familiar with PostGIS, think ST_Length versus the distance between the start and end points. A boring example: a straight line has a tortuosity of one, because the distance between the starting and ending points is equal to the measured distance of the line. This is a more fun example, because there's a curve — and my artistic skills, as presented, are very obvious here. If anyone needs to contract a good-looking, hand-drawn presentation of half a circle, I'm the guy to find. And you can see, once we do all of the math, it's got a higher tortuosity value, which means it's more fun. So what did I do to find some roads? I got the OpenStreetMap data for Minnesota and used Postgres, PostGIS and pgRouting — and if you're new to the open source stack, there's great documentation on how to install all of that stuff. But after you've installed it is where things can get a little awry. For example, I found that even loading up the data can be problematic. There are a number of different ways to create topologies with the various open source tools. The two that I tried were osm2pgrouting — but I got frustrated — and the easy way out, which is osm2po. It worked really well for these examples, but I need to fully disclose that the license isn't particularly open. I tried to find the source and get some more clarity on it, but it's not what I would consider a libre and free tool. So to open up the data, all I had to do was figure out this very simple command. And if you were doing the same thing, you'd have to go through and discover all of those command options yourself — or, like a good presenter, all of my code is on GitHub, so there are shell scripts that make everything easier. All right. So: get all the data loaded, calculate the tortuosity — which is a really simple script, again also available on GitHub — and we finally get to the point where you can make a topology. This is where things can get a little risky, because as I mentioned before, I'm generally a developer, I like to code things, and now I'm going to use a desktop GIS and I'm going to do it as part of a live demo. So realistically, folks, be warned. All right. Here it is — right, see, I have it on the wrong screen. Basic computer use: I have it down. All right. This is the map that I got from those first few steps, where each one of these purple nodes is a node in the topology that was generated by osm2po, and each one of these lines is thematically mapped to measure the amount of fun. The darker the orange, the more fun it is.
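A side note before looking at the demo in more detail: the tortuosity calculation described above is easy to express in PostGIS. The sketch below is mine, not the speaker's actual script; it assumes an osm2po-style ways table with a line geometry column named the_geom, so treat the table and column names as assumptions.

# Rough sketch of the tortuosity calculation described above, run from Python.
# Table/column names (ways, the_geom, tortuosity) are assumptions about the schema.
import psycopg2

sql = """
    ALTER TABLE ways ADD COLUMN IF NOT EXISTS tortuosity double precision;

    UPDATE ways
    SET tortuosity = ST_Length(the_geom)
                     / NULLIF(ST_Distance(ST_StartPoint(the_geom),
                                          ST_EndPoint(the_geom)), 0)
    WHERE ST_GeometryType(the_geom) = 'ST_LineString';
"""

with psycopg2.connect("dbname=rally_routing") as conn:  # hypothetical database name
    with conn.cursor() as cur:
        cur.execute(sql)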
For example, down here there's a red road, and there are a few others. This is only a medium road in terms of entertainment. But this was all generated by a bunch of SQL. And we can get my presentation back on the board here. So I had all of the data that I needed. The next thing I wanted to do was to be able to route, and route based on that fun calculation. So I created a Python script that takes in two geographic points and creates a route. Now, to solve that route — one of the best parts about these rallies, which go for miles and miles and miles while you're paying attention to the road and your speed the entire time, is that generally speaking, in the morning you start with coffee. Then you do some stuff, and step three is you end up at beer. So in every one of my examples I picked a coffee house that was some distance away from a bar. We're going to get from coffee to bar taking the most entertaining route possible. Now, if you're going the reverse — from the bar to coffee — that should probably be a public transit route. That's not covered here; I don't think there's a way to find a more fun train. So back to QGIS for this, which is conveniently already up on the big screen. The first one is actually a rally that I have run, so this is real data in action. And I'm going to turn this off so you can see the difference in the lines. The green line is the most direct route from a tequila bar that I like in Duluth, Minnesota, to a bar that's basically in the middle of nowhere, in a town of less than a thousand people — but it seats 300, so a third of the town fits in that bar. And that's the most direct way to get to it. On the other side, what you'll see is the purple line, which is the most fun way to get there. When I did some statistics on the route, about 75% of it is gravel roads. Some of them I know for a fact are used for logging, so they're barely wider than the car. And you can just go drive up there using those directions. The other example I have is one that's closer to a populated area. This is in the Twin Cities, near the largest population center in Minnesota. So again, you start at coffee, you end at beer, and you go roughly halfway across the state of Minnesota doing this. And again, you get a kind of contrast between routing for speed and routing for fun. It's all about having those squiggly lines that would look like static if they were sound. All right, so while I did this basically to find fun roads to drive on for myself and my friends who are into that kind of thing, you can really use this technique for any sort of routing where you have a very specific cost. Almost all of the tools can be recycled in order to do so. And I really hope that by checking this stuff out, or using some of my code on GitHub, it can actually save more people some time and headache when they're trying to figure out how to do these kinds of projects. Also, if you are inspired to try some fun driving and want to meet some interesting people, you can always try rallying. Again, there are clubs all over the world. That's my local club from back home. And thank you very much. So thank you very much, Dan. There is very little time for questions now, because we have quite a full session with three talks, but we can take a question or two. Thank you very much. So, one of the things — I can repeat the question — one of the problems with calculating tortuosity, which I also ran into, is that it is scale dependent.
So, if you have — it's in a sense an average. The longer the line is, if you had a really curvy bit in the middle but it was straight otherwise, you could get a misread on the tortuosity. So one of the things I did when calculating it is I actually wrote some code — and you'll have to forgive me if it is not on GitHub — to break up the lines by a maximum allowable segment length. I think I set that to somewhere around three or 400 meters, so that you were never calculating the tortuosity for a node-to-node connection that was more than three or 400 meters long. Because I ran into that exact same thing, which is particularly frustrating where I'm from: most of the roads are designed on a very grid-like layout, just because we have the space. So you can have one road, node to node, that is three miles long — you're looking at five kilometers for a single segment — and end to end it doesn't have a very high tortuosity, but somewhere in the middle it runs through a forest. And at that point they were going around ponds and ditches and logs and things, and in that stretch it actually matters a lot. So yeah, I ran into the same problem, but setting that — gridding your data, or just adding the extra nodes — lets me calculate it more cleanly. So, thank you for the presentation. No, it's on. Thank you for the presentation. I was wondering, maybe I missed it: in routing you generally minimize the costs, and in this case you want to maximize the fun. How did you translate the fun into the costs? Select max from fun, and then invert the data set, so that it would do the minimum. I use subtraction. So I found the maximum fun level and then inverted the column so that all of the pgRouting algorithms work correctly. So it thinks that — so there are two columns in the data set, functionally. The first of which is the calculated fun, which goes from zero to much fun. And then there's a second column that does a little bit of math just to invert it, so that the most fun roads have the least amount of cost. That way, when pgRouting does it, it's still trying to minimize cost from A to B, but what it really ends up doing is picking the roads with the most fun. So does the fun somehow correspond to the length of the road, or is it completely independent of the length? The fun rating corresponds to two factors. The first of which is the tortuosity, so how curvy it is — that's a relative measure that is scale dependent, but it's a relative measure. And the second is I added a multiplier to make it more fun if it was a gravel road. So if OSM flagged the surface type as gravel, it got like a double rating. Because really loose gravel roads in the US have the car going all like this at high speeds anyway, so it's more fun that way, when you think you're going to crash. Thank you. Thank you. I had two comments. Would you consider bringing in vertical curvature as well, because a road that goes up and down should be more fun than a flat road? Yes, I did, and ran out of time. The other side of that is, where I live, the elevation changes are not as substantial as they are in various parts of Europe. The highest differential of a road is maybe 300 meters. And another small comment: maybe bringing in the amount of snow or ice could increase the fun as well. It very much could. Luckily that can be done based on the latitude in Minnesota, because frankly, from the bottom of the state, as you get farther north there will be more snow and ice. Thank you for the presentation.
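To make the cost-inversion answer above concrete, here is a rough sketch of the idea in SQL driven from Python: invert the fun rating into a cost column and hand it to pgr_dijkstra. The table and column names (ways, fun, cost, source, target, gid) are assumptions about the schema, and the node ids are placeholders — this is not the speaker's actual script.

# Sketch of "maximize fun" routing: invert fun into a cost and route on it.
# Table/column names and node ids are hypothetical; adjust to your topology.
import psycopg2

invert_sql = """
    ALTER TABLE ways ADD COLUMN IF NOT EXISTS cost double precision;
    UPDATE ways
    SET cost = (SELECT max(fun) FROM ways) - fun + 0.001;  -- keep costs strictly positive
"""

route_sql = """
    SELECT seq, edge, cost
    FROM pgr_dijkstra(
        'SELECT gid AS id, source, target, cost FROM ways',
        %s,   -- start node (e.g. near the coffee house)
        %s,   -- end node (e.g. near the bar)
        directed := false
    );
"""

with psycopg2.connect("dbname=rally_routing") as conn:
    with conn.cursor() as cur:
        cur.execute(invert_sql)
        cur.execute(route_sql, (1234, 5678))   # placeholder node ids
        for seq, edge, cost in cur.fetchall():
            print(seq, edge, cost)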
The notion of fun is very specific in this case. I was just thinking: have you given some thought to scenic routes — providing a scenic route to the end user, with some landmark attributes that describe a point of view? So instead of taking just the geometry, as you have used some attributes like the gravel, you could apply the same model to a scenic route for the best points of view, if you add the data — landmarks and viewpoints, or a nice church or whatever. So, a lot of the time when we're doing these types of events, you find yourself out in pretty scenic places. But that's actually a suggestion someone else gave me while I was spitballing this presentation: how do you find the most scenic route someplace, or the most beautiful, or the most landmarked? And ostensibly, if you had the data, you could go from landmark to landmark as your waypoints and calculate times and things like that. So, one last question. I just want to ask: I'm sure there are other measures for curviness. As far as I know — I don't remember everything — but I know that there are ones based on the sinus or closeness or something like this. Did you also consider those kinds of curviness measures? I went through a couple of different methods for calculating curviness, but to make it computationally realistic, I found some of the others to be too slow. This was a very quick way for me to calculate the tortuosity and then classify it. Some of the other methods I found required doing integrals over the whole of the line, and I didn't really want to add another inner loop for calculating that integral, even if I worked out the algebra for it. This was easy to translate into SQL and was very, very fast. I have to dig it up, but I'm sure that in cartographic generalization this kind of measure is very important, to make readable lines in the Alps and things like this. So that's why I know a little bit that there must be some others. Thank you. Thank you for your presentation.
Americans have miles and miles of open highways. Set the cruise and drive for hours in straight lines. Routing a user to find the fastest way from an origin to a destination is performed by any number of providers. But what if you're not interested in the shortest route? What if you're looking for the most scenic route? The most entertaining? This is a problem for planners of regularity rallies. Planners hope to provide challenging roads at legal speeds that avoid heavily residential and developed areas. OpenStreetMap, PostGIS, QGIS, and pgRouting to the rescue! Learn how to create alternative routing-cost structures to find and create new types of routes. An affinity for routing, driving, and/or gravel roads a plus. Dan "Ducky" Little (dbSpatial LLC)
10.5446/20291 (DOI)
Next speaker in this round of presentations is Hans, and he's going to tell you all about the most important thing: what happens if you open up your data. Yes, thank you. For those of you who saw my talk in Como, I apologize — you will hear much of the same blabbering as last year. For your sake, I've changed some pictures to some other pictures. First of all, I'd like to state that I'm not from the Danish government. People keep asking me: did I work in the government? No. Did I press them to set their data free? Yes. Did I think I had a right to do that? Yes, because I used to produce most of the data they bought — so why not earn money on it twice? I'm going to tell you a bit about the Danish basic data program. We'll talk about what we actually have in Denmark, then I'll show you a bit about our free data, what we can do now, and some of the impacts I have seen and some of the impacts the public admin guys I talk with every day have seen. So, the Danish basic data program. When you need to set data free, you need to make it sexy — and for politicians, it's all about economics. So why not make a basic data program instead of just uploading it to an FTP? We can have a lot of politicians and economists write a lot of stuff, and it turned into the Danish basic data program. Basically it's core information, so we have a definition now of what it is we're going to free — and it's about everything. But most importantly in government, it's financing: we got some money to do this program. And then, most importantly in Denmark at least, it's linkability. Because we used to have all these different silos, different registers going back to the 60s, holding information on the same stuff but in different places. And some genius decided: when we set this stuff free and are going to use it more efficiently, why don't we link everything? And this is where I come in: it's distribution. We collect everything — we save a lot of money by collecting it into a single distribution platform — so that everyone can get it through a uniform API and all the stuff we like, OGC filters and so on, through the data distribution platform, the data distributor. And most importantly, it is coordination. And this is the group of people — this is one of the images I changed — this is the group of people who wrote the Danish constitution. So you can say we have a coordination committee writing the data constitution of the basic data program, deciding what we do and when we do it, so that we don't do things in the wrong order and everything goes according to plan. So that's all the boring stuff, that's politics; it was necessary to set the data free. Then, on the first of January 2013, they pressed the button and we were ready. That's coincidentally the same date that my company was founded. And what data is free now? Well, here you see the coronation procession of the Danish king Christian IV, and everywhere the king goes, he has these two guys throwing out gold to the people. And that's the same with the data now: everywhere the government goes, it has two kind guys — Jan sitting here, and Anas somewhere — throwing out all the gold to people like me (you can see me here) and my colleagues. And we're picking up the gold that the government throws everywhere and trying to make it into bread. And to be a bit serious: it's geodata — all the geodata is released.
That's everything from cadastral data to, within 10 centimeters, where every manhole is and where the corner of the house is — everything we registered in Denmark; that's much of the data I used to produce. And it was actually released through a download portal — boom — which they open sourced, or kind of open sourced, and later shared with Kartverket in Norway. And they opened a lot of the web services that they already had in place, so that everyone could suddenly use them. And the funny part was that everyone was using them — but they were using them for clients; we'll come back to that later. Then they opened up company data. We had to wait quite a while for those, but it's open data now, both as web and download services, and they're using Elasticsearch to power quite a powerful API. And then — again, politics — they said they opened the address data; actually it was opened years before and was already being freely distributed, and it was kind of the reason why they opened up the rest, because it was quite a big business case to open up the addresses. But let's see: all the data we had on hand was opened. And what could we then do? Well, another picture of famous Danish industry men — it's called Men of Industry — and we could suddenly do everything in private business. We were used to doing a lot of business with public admin, and we couldn't sell anything to private companies or to end clients. Actually we could, but the entry fee for starting up a project — say, using part of the data covering mainly the island of Zealand — would cost us five million kroner, that's about a million euros. And then we had to test whether we could get anyone to buy this. And if you haven't tried it, I encourage you to go to another company and say: for an entry fee of two million euros, I might give you a better product, I might give you some added value — and see the faces of these guys, all the bosses. Instead, now we can do what P.S. Krøyer, the famous painter who made this, did. He went to this guy and said: I'm making a large painting, could you please pay me for standing in the front. And he went to this guy and said: you don't have that much money, but the other guy is paying for part of the painting, so you can stand in the middle. And probably this guy didn't pay anything. But that's how it worked back then, that's how he earned his money: he kept collecting money from all these different small sources, and that's what we can do now. Instead of finding 15 million kroner somewhere, we're picking it up on the street — 10 kroner and 10 kroner and 10 kroner from our clients. And now we can also enrich our own data sets. This is an example of where water flows. We used these data — different kinds of data sets — to calculate the risk of flooding for insurance companies in Denmark. And we could do that before, but only for a municipality, because they had already paid for all this data. Now we could enrich our own data sets. So we know something — for instance, where have we seen flooding before? And the government knows something else: where does the water flow? Usually those two data sets are pretty correlated, and we could actually see it now, and we could use these data sets, enriching our own data, to predict where there will be flooding next year, or the next time we have a great rainfall. And you can do mash-ups and combine. So this is the same: we have different public data sets, and if a truck full of beer falls over here, where should you stand and collect?
You might call this guy, or this guy. It's always good to mention beer here at FOSS4G, because usually I say it's a truck full of shit — but that works better at environmental conferences. But let's say it's beer, and then we could calculate this. And actually it's quite widely used out there right now. We tried to push this product before the free data, and nobody wanted to pay the entry fee — again, nobody wanted to take the risk. But now everyone can see that paying 2,000 euros for access to a service that provides them these value-added calculations is a good idea. And then you can do big data. This is also a famous picture, of a sick woman, because I hate the word big data — we've always been doing big data. But let's say you can do big data: you could take every one of the government data sets, put them in there and maybe calculate what she's reading or something like that. You could do big data stuff and earn money on that without having to pay a large entry fee. So here I usually have demos, but we only have 20 minutes. So what have the impacts been? Well, you heard much of it just before. It's the same in a big government institution as in the big railway company as everywhere else. But let's just go through it; I have some numbers for you, kindly provided by Anas Rohauke from the map supply of the Danish geodata agency — or the agency for data supply and efficiency, as it's called now. In all of 2012 they had 800 users. That's mostly one municipality, one user; one ministry, one user. And then me as a partner, and John, and others. So, 800 users. And in all of 2012 they had 8 million requests. And that's everything — WMS, WMTS, WFS, REST services; one call on port 80 to the web servers is one request here. And then of course things broke down after the first of January 2013, because suddenly it was much more popular than people — even I — could imagine. And I'd been standing there yelling for years that they should open up because it would be used so much. It was quite a big load, and to date they have 2 billion requests, by the end of this month as far as I understood. And they had 60,000 unique users. I'm going to get back to that, but opening up data is really about changing this number. It doesn't matter how many requests I do, but when I need the information, I need access to it. So 800,000 unique users compared to 800 unique users is really something, I think. And kudos to John and his staff for really managing to keep a system live with this amount of requests. So, we wanted data to be free, but somebody wanted to know who was using the data. So we kept using the old system — don't pay more for a new system, just create open users. That's why we know there are 60,000 unique users, and that's why, after doing a survey of these unique users, they actually knew who was using the new data. Only 24% were companies, and we had 6% more public admin people who should have had access to the data before but might never have heard about it — or maybe their part of the government hadn't paid the other part of the government for the rights to use that part of the government's data, paid for by the government. So I was almost starting to sing there. Then we have NGOs, 3%. Then we have the ordinary Joe, 66%. That's the guy who sits in his basement. He doesn't exist — we heard yesterday about Steven — but there are guys sitting in their basements; one is called Peter Bollersen and I know he's watching, hi Peter.
That's him and some of his guys, his friends; they're sitting in the basement trying to make new products out of stuff. And again I think that's quite good: we open up data not only for me to earn money — that's also fine, thank you — but also for inspiration to others. And the survey said that 44% of the users would not have used the basic data if it were not free. I guess you can up that number, but 44% were honest. And that really says something: people wouldn't even have used the data. If it had cost one euro, they wouldn't have used the data and wouldn't have registered to use it. And what I find inspiring is that the use of the data is very diverse as well. So we have real estate sites suddenly doing stuff. I heard from a single guy in the extreme outskirts of Jutland — that's where I'm from, the extreme northern part. He made graveyard planning software from the open data, because you need to plan where you bury people, you need to know where they're buried — and why not use the data the government collects every summer anyway and do graveyard planning from that? Then you have the ordinary flight simulator plugins: fly over the country, cool. And we had a lot of amateur historians who suddenly found places in Denmark they wouldn't have found, because they sit and look at the height model and light it from different angles, and then suddenly you see an old burial site or an old town or something like that. It's clearly visible in the terrain if you know what to look for. We've had quite a few guys write to us — I don't know why they wrote to us, but they did — about services like that. And from my chair, what's changed the most is the innovation process. My old job used to be going to a municipality and saying: what's your problem? They would sit down and try to explain to me: well, I think I need a technical solution to this and this and this. And I would sit and listen, honestly quite often not understanding what they were trying to solve, but I had a technical definition, and I would go back and make a problem formulation, then go back and sell it to them, make a product that didn't solve the problem, go back, reiterate, and then make another product that solved the problem — and then everybody was happy. Now what we do is the opposite: we make the product before people think of it. Of course, if someone comes to us and says, I'd like to solve this and this, we do that. But mostly now we actually sell products and solutions that we've thought of ourselves. So we can ask 10 clients and make a product combining those 10 clients' demands. And most public admins actually get quite a good effect from that, because it lowers the price and we share. And most importantly for the public admin people: we take the risk. So I can spend one hour with a dataset that previously cost 50 million kroner and find out whether I have a great idea. Mostly, as most of you know, the idea sucks — but I can do that a thousand times, and I need to, to find the one great idea I can monetize later. Just to say something about the economic gain: I told you the addresses were officially freed — they were free already — and they made a survey after they freed the data. Looking here, that's the addresses. This is the first distribution link, so we had companies buying addresses, and this is the second: the companies they sold the addresses on to. And the freeing of the addresses gave a direct gain — no fantasy politician money, but a direct gain — of 14 million euros a year. That's 100 million kroner.
And that's quite a bit — and that's only in the first and second link. So that kind of kicked the door open to freeing the rest of it. But that's the direct gain, and it's one of the few places where you can actually see, for releasing a single dataset, what the added value is. One of the effects we've seen has also been better data. This is a painting from Skagen, the extreme northern part of Denmark. These are fishermen helping each other with the lifesaving boat. They figured that if they made a communal lifesaving boat, they could help each other. And it's the same with data: once the boat is in the sea, everyone actually wants to make the data better, and they want to help you make the data better. So instead of the feared situation where people will hate you, they'll actually try to help you. That's quite an experience for people who were afraid that we'd see a lot of shit storms about bad data. And they'll get better insight. Oh, I'm sorry about the contrast in the picture. This is Dennis — I don't know if you've seen Monty Python and the Holy Grail — this is Dennis, the communist peasant. And Dennis could use some of this data to get better insight and ask the questions he actually did ask the king — for instance, whether strange women lying in ponds distributing swords is any basis for a system of power. He might have asked quite a lot of other questions once he had his GIS desktop open and made another analysis and said: why did you place your castle in Copenhagen and not where land is cheaper, or something like that? But those of you who have seen the film also know that Dennis causes quite a lot of frustration, because — let's be honest — it's not always fun having a lot of input, especially if people get angry. And they do get angry when stuff is wrong, and especially when they don't understand it. And now you've shared with so many people, and suddenly they come back and say: I want more, give me more information. But we already gave you so much! Or you might fear that the data is used to show that there's a risk — that's what I've been doing: there's a slightly higher risk of your house getting flooded. You might fear that you'll end up paying too much in insurance, or that the next big motorway is going to be placed right on top of your house, or that we're going to sell off Copenhagen to the Swedes, or stuff like that. And what you need to know is that the data will be wrong — and people accept that the data will be wrong. As long as you're open about it, this is really not a problem. But of course the fear is going to be there. If some of you are in public admin, this slide is the most important slide if you are trying to open up data, because this is what you have to fight: fear, uncertainty and doubt about what the fuck these nerds will do when they get our data, and will they see that we've been doing everything wrong for years. That is the fear you have to fight. And then: of course we won't open up — and of course we will. Because, and that's also something you have to acknowledge, data will be used to show something that is completely wrong. Some student from somewhere will turn the timetable data into something that shows that Deutsche Bahn is never on time, and that at some point they promised that the regional trains would be faster and now, as he's shown, they were 0.00001 slower, or something like that.
It will be used for something that is wrong, and people can see through that. And in conclusion, another painting of a Danish king, and for the fast ones among you, it's the same king as on the first slide. This is King Christian IV, shot in the head by the Swedes, standing up and saying that even though he's hit, things will go on. And that is quite a good image of what we're seeing right now. We have all kinds of different situations. People think they're hit, but they are fighting on, trying to solve the problems they've always solved in public admin. We have the guy sitting here, that's me, collecting money once again. And really, you get all these different kinds of impressions, but you need to see that the effects we have seen in Denmark have, from the citizen side, been good; from the public admin side they've been good; and for private businesses they've been good. There are some stumbling blocks, and there are bad things about it, but they are far, far outweighed by the positive stuff. And thank you for your patience. Well, another great presentation in this track. Thank you so much, Hans. Any questions for Hans? I see a lot of hands. I'll start at the front. Great presentation. When data gets published, the common demand is that the data is well structured and finished. But often when you free data you get new demands regarding data or data quality. Do you have any insight into how the Danish government handles this demand for new structures or value-added data? Because a government would normally produce nationwide data and not, like a business, something more narrow. Actually, I do and I don't, because, as you know, we both frequent the same places, and it works both from the floor up and from the top down, actually. So people are working on INSPIRE, trying to standardize the data, and people on the floor are actually trying to solve the everyday problems. So when people ask for data, as far as I can see, they usually try to supply what people demand. That's a good situation. And then I think it's some kind of magic, really; everybody has turned into a great soup of love. What I see is that people on both sides actually try to figure out what we need and what to deliver. Could you please show the pie chart about the users again? Yes, sure. Here. So this is the new pie chart; how does this relate to the old users? Were the old users only NGOs and government? The old users were 99% governmental or local admin. Okay, thank you. You mentioned a 14 million direct gain from opening up the addresses. How was that figure computed? There's a link in my presentation to the report made by the engineering company called COWI, and you can go read it. He's standing down there. Did you make that, Peter? Okay, good. You can ask Peter, it's his responsibility. No? Peter, do you want a mic for that? No, it was another part of the company who made it. Sorry, Peter. It's a direct gain. It could be collected because you knew every distribution company. So you went to each distribution company and actually took the numbers from them, and they kindly released the numbers for their clients. That's why it was done for the first two links only: because you knew who bought the addresses directly from the government, and they kindly supplied their clients. So it is a factual number, it's not made up. And the estimated numbers we talked about when making the basic data program were way higher than this; there the expected economic gain is something like 100 to 200 million euros per year.
But that is counting that you save two seconds each time you enter an address, multiplied by the hourly wage of a... Yeah. That's right. Anyone else? In that case, I thank you very much, and I thank Hans especially, of course.
In 2013 the Danish Government freed most of the basic data in Denmark under the "Basic Data Program". My talk will go through the effects we have experienced so far: the release of data has not only changed what we can do for both private and public sector clients, it has also changed how we do it. As data is now free, we do not have to wait for public sector clients to approach us with ideas - we can now approach all types of clients with products and proposals of our own. An apparently tiny thing such as being able to develop products on our own has turned the business model upside down in many instances. Although you cannot sell a free beer, you can sell the knowledge of how to open the free beer, or a ready-to-use bottle opener, and possibly some consulting on how you can get to enjoy the free beer the most. The wider use of data has also meant that public servants have had to adapt; to face fear of errors and ever more demanding "customers." It is important to acknowledge that opening data up has consequences that public servants need to face. (Hans) Gregers Petersen (Septima)
10.5446/20287 (DOI)
Okay. Hello and welcome to the third and last talk of this session block. We now have Sven Geggus, who's working on improvements for map localization, especially for Westerners. Hello. I'm Sven Geggus and I work at the Fraunhofer research institute IOSB in Karlsruhe, which is one of about 60 Fraunhofer institutes all around Germany, and they are publicly funded. I'm a sysadmin, GIS server and Linux admin, the all-things-Linux guy there. And I'm doing the German Mapnik style as a hobby. Fortunately, I was able to do some work for this project as part of my day job, so part of what I'm presenting here I have been able to do as part of my day job. So, what I'm talking about: the motivation for doing this is that if you look at the standard openstreetmap.org map and go to countries where Latin script is not the norm, you usually won't understand anything. The reason for this is the rule that the project uses local languages for acquiring the names of geographical objects in the map. So we can't just render names the way the OpenStreetMap standard style does if we want to understand the map from a Westerner's perspective. But fortunately, in contrast to conventional geodata, OpenStreetMap does contain at least some localized data, and we should use this when rendering maps to get a more understandable one. So how do localized objects in OpenStreetMap data look? I have two examples of country objects. On the left-hand side you can see the object for Germany, on the right-hand side the object for Israel. What we want to use is actually something in Roman script, so not the name tag. We have these colon-separated name tags in the OpenStreetMap key-value system, where you can use arbitrary keys, and what we want to use here, if our target language is German, is the name:de tag. Probably we also want to have the original name in parentheses. So that is our target. Let's have a quick look at the writing systems of the world. As you can see, the world is mostly dominated by Latin script, with a few exceptions: the Arabian world, the Russian Federation, and large parts of Asia. The objective is to use localization for all these countries. India is an exception, because India has English as an official language, so in India we will always have Latin-script alternatives. Here's what the OpenStreetMap Carto style looks like for the rural area around Moscow, as an example, and here's what it looks like if we add localization. So here's our main objective: making the map readable for Westerners by using Latin script. Use the localized data from OpenStreetMap itself whenever this is possible; it is not possible for every object and never will be, because many objects just don't have a Latin name. So use other localization methods if OpenStreetMap does not contain the data we want, that is, transcription or transliteration. So how do we do this? My approach has been to use PostgreSQL stored procedures, which is an advantage, but it can also be a disadvantage. It is an advantage because it is renderer independent: it doesn't matter whether you're using Mapnik, MapServer, GeoServer or whatever for rendering, because all you have to change is your SQL. The disadvantage is, well, if your data source is not PostgreSQL, for example if you're using raw OpenStreetMap files or shapefiles or something, you can't use this. So here's what the implementation looks like.
I have three PostgreSQL functions which can be used. They are placed in their own extension. This is actually usable for any language using Latin script; it doesn't need to be a Western language. It was done for German, but with other Latin-script languages in mind. A convenient way to use it is to add database views which look like the original tables, so in the best case you don't even need to change anything in your style. Okay, one thing I need to explain: why I have separate functions for getting place names and street names. The reason is that if you add a name in parentheses, you have the local name and the localized name, and this gets very long. So what I also do is abbreviate 'street', like 'St' in English or 'Str' for Straße in German. And this is actually a place where I would like to get more native speakers of other languages involved, to get this abbreviation code for as many languages as possible. So here is the state machine that decides which name to use. I have a look at the target language tag, in my example name:de. If we have this, just use it. If we don't have it, have a look at name. If name is written in Latin script, well, we are in a country which uses Latin script, so just use it. If name is not in Latin script, check whether int_name, an international name, exists; I expect this to be Latin, so just use it. If this is not the case, check whether there is an English name, and if there is not, as a last resort, we have to use transcription. As a short interim slide: the difference between transcription and transliteration. Transliteration means something which is reversible: transliterate something, transliterate it back, and you get your original script. This is not the case for transcription. Transcription is done with a Western reader in mind, trying to get the pronunciation as close as possible to the original. So transcription is what we need, because we don't need reversibility. Another thing you need to know if you think about stuff like this is that there are actually three, probably four if you consider hybrid forms, classes of writing systems. First, there are alphabets, the well-known Latin, Greek, Arabic; transcription is usually easy with them. This is also true for syllabaries, the Japanese kana being one example, so transcription is relatively easy there. Then you have logographic writing systems like Chinese. Chinese is actually the only logographic writing system which is still in use; there have been historical ones in the history of humankind, but Chinese is the only one left. And the problem with a logographic writing system is that transcription is only possible per language: if you do a transcription as Chinese and a transcription as Japanese for the same characters, you will get something completely different. So this has to be resolved in some way. And last but not least, there are hybrid forms like Thai or Korean; those are mostly easy to transcribe. So, a few known problems of transcription that I have encountered; there may be more, and I would be glad to hear about more, and best of all how to resolve them. First of all, the logographic transcription of Chinese characters has to be based on the place of the geographical object. Okay, we are doing maps here, we have a geo-aware database, so this is not a problem.
We can just determine what country we are in and decide, based on the country, which transcription to use. The current implementation does this for Japan, and this will probably need to be extended to some other scripts. Another thing is that Thailand uses the Royal Thai General System of Transcription, which is what you get on local road signs and such. Unfortunately, the ICU library, which is publicly available as free software, uses ISO 11940, which is something completely different and not widely used. I'm not aware of a free and open source library implementing RTGS; if there were one, it could easily be added. Another thing is that Arabic and Hebrew, for example, do not write all the vowels in the words; the readers add them while reading, so they are actually not in the word. An automatic approach can't add them without word lists or something, so transliteration will therefore often be incomplete. As an example, Tehran transliterated with ICU gives you something like 'thrn', without the vowels, which is not what you are likely to expect. Fortunately, for place names this isn't that problematic, because they usually have localized name tags in OpenStreetMap, so you don't need to go for transliteration. A few words on the current implementation. I am using the ICU (International Components for Unicode) library as a default; if there are better ones, they can be used. A PostgreSQL stored procedure has been implemented which is actually just a thin layer for calling the any-to-Latin transliteration function of this library, so it is available as a stored procedure. And we have place-dependent use of transcription libraries, currently for Chinese characters, which the Japanese call kanji. This is performed by the KAKASI library, which is also free software, and it is only done if the object is located in Japan; it is not used otherwise. This scheme is extendable to other writing systems and countries, and something like the usage of different transcription libraries by writing system instead of by place could also easily be added. Two other problems I encountered with internationalization: there is no single font available which contains all Unicode characters, so you have to use different ones, and the compromise in the current code is to render local names only if they use Latin, Greek or Cyrillic. This should be extended. A possible resolution would be, as I learned yesterday, that MapServer 7 actually implements this, so I have to have a look: the renderer can use different fonts based on the character set, and MapServer 7 does this. I'm not exactly sure what Mapnik currently does. It might also be possible to produce a 'best of' font from various sources; I didn't try this yet. One last slide: political problems in localization. Political problems usually cannot be solved by technology. Many regions of the world have been part of other countries in the past, for example the German settlement areas in Eastern Europe: even the smallest villages in Poland, Alsace or Lorraine still have German names. Nobody knows whether they are still in widespread use or not, and in the worst-case scenario the usage will offend people. So the only thing we can do is trust the mappers: hopefully the mappers will only acquire names which are still in use, and use old_name otherwise. What we do as a compromise is to always render the current local name in parentheses.
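To make the fallback logic and the ICU-based last resort described above a bit more concrete, here is a minimal sketch in Python. It assumes the PyICU package as a stand-in for the ICU calls done in the PostgreSQL stored procedures; the function is illustrative pseudologic following the talk's state machine, not the actual extension code.

```python
# Illustrative sketch of the name-selection fallback described in the talk.
# Assumes the PyICU package (pip install PyICU); tag keys follow OSM conventions.
import unicodedata
import icu  # PyICU bindings for the ICU library

# ICU transliterator converting any script to Latin (the last-resort transcription)
ANY_TO_LATIN = icu.Transliterator.createInstance("Any-Latin")

def is_latin(text: str) -> bool:
    """Rough check: every alphabetic character belongs to the Latin script."""
    return all("LATIN" in unicodedata.name(ch, "") for ch in text if ch.isalpha())

def localized_name(tags: dict, target_lang: str = "de") -> str:
    """Pick a Latin-readable label, following the state machine from the talk."""
    if f"name:{target_lang}" in tags:          # 1. explicit localized name
        return tags[f"name:{target_lang}"]
    name = tags.get("name", "")
    if name and is_latin(name):                # 2. local name already in Latin script
        return name
    if "int_name" in tags:                     # 3. international name, expected Latin
        return tags["int_name"]
    if "name:en" in tags:                      # 4. English name
        return tags["name:en"]
    return ANY_TO_LATIN.transliterate(name)    # 5. last resort: transcription via ICU

# The talk suggests rendering "localized name (original name)" labels:
tags = {"name": "Москва", "name:en": "Moscow"}
print(f"{localized_name(tags)} ({tags['name']})")   # -> Moscow (Москва)
```

A real deployment would additionally need the place-dependent branch mentioned in the talk, for example calling a kanji-aware library such as KAKASI for objects located in Japan; that part is omitted here for brevity.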
So, prospects and enhancements: a technical solution to the problem of rendering glyphs of different writing systems in a single label. As I already said, this will probably already work in MapServer 7. The addition of more and better-suited libraries: if somebody in the audience knows of a library which might be suitable, I will be happy to hear about it and integrate it. A more fine-grained distinction of the transcription algorithms by place; it might be necessary to do this in a more fine-grained way. And adding street abbreviation code for all common languages, actually for all languages where streets are commonly abbreviated. And suggestions from the audience. I will do a two-minute live demo and then I will take questions. Okay. So I open a PostgreSQL command line. First I need to create my extension, and then I will show you how it transliterates. Probably somebody knows these characters; those are the characters for Tokyo. And you get 'Dongjing'. Well, this is not very good, so we need to use something else. Okay, now I get 'Tokyo'; better, I would say much better than 'Dongjing'. Still not perfect, but, well, OpenStreetMap has already acquired Tokyo in the proper script for us anyway. But if we had to use transcription, we should go for the second one. And last, there is the geo-aware transliteration function. So I just add a point here, and 137/35 is somewhere in Japan. If I use a point somewhere in Germany instead, like 9/49, I will get 'Dongjing' again. So this is aware of the location: whenever the object is located in Japan, it will use the Japanese transliteration. And if you want, it's just as easy to drop the extension again. Okay, one more demo. I take the Karlsgasse in Prague, which is called Karlova in Czech. Prague has German names all over the place because of its history, but in the real world you can only see the Czech street signs, so we should add the German name in parentheses. This is what it looks like, and this is actually what we are using for the map. As you can see, if you use the street name function, the 'Gasse' is abbreviated to 'G.'. Currently I have these abbreviations for Russian, German, English and Ukrainian; I would be glad to get them for other languages as well. And if I use the place name function instead of the street name function, nothing gets abbreviated. Yeah. So, okay, so far; questions. Thank you very much, Sven. Yeah, how about turning this around? Because obviously a lot of tourists, from Japan or Asia for example, come to Europe and they want to see the Japanese name for German streets or German places. This can probably be done, but you need to change the code. Because the code is mostly done with Latin script in mind, there are a few corner cases where it won't work. First of all, you would have to extend the ICU wrapper functions, which are currently hard-coded to any-to-Latin; this should become any-to-whatever, Thai, Japanese. I think it's not a problem for Japanese, because they can read Latin script as well; it's probably more for Chinese people. And I'm actually not aware whether there even is a transliteration function in ICU for the other direction. So it might be possible; you can configure the code to do it, but it will probably not do what you are expecting. So it can be extended in this way, I would say. Hi, great talk. Thanks.
Just to answer your question about Mapnik from during the talk: as far as I know, Mapnik has a fallback mechanism. So if it isn't able to render a place name with the font, then you can provide it with Unifont, for example, and it will use another font set to render the place name. Do you know if this also works inside one label? Because I have Japanese characters and European characters inside one label; that's the problem here. Okay, I'm not quite sure about that, but I think it works, because for our styles we are using both names as well if possible, so the local name and the English name, and that is in the same label, and it should work. I mean, the only problem I discovered was that the Unifont seems to be a little bit smaller. So if you apply a text size of 10, for example, it works fine for your default font, but the Unifont would be almost unreadable. Yeah, so this is something which has to be addressed, because I would like to have the parentheses all over the world. Do you check the int_name tag for Latin characters, or do you use it right from the database? I'm probably using it verbatim. It might be necessary to add an is-Latin check; the code actually contains an is-Latin function, so this would probably just be a two-line patch. Because, you know, the first rule of OpenStreetMap is that you don't trust OpenStreetMap data. And for example, in Northern Africa you will get a lot of Arabic names, because for some reason they think that Arabic is pretty international. Any other questions? Okay, one short question from me. Are all your code changes online on openstreetmap.de, or are they in use on openstreetmap.de? Maybe openstreetmap.de has an older version of the code; the current version will go online on openstreetmap.de in a couple of weeks, I would expect. Okay, no more questions. Then thanks again, Sven.
The standard rendering style used in OpenStreetMap today produces hardly readable maps in countries where the usage of Latin script is not the norm, at least from an average Westerner's point of view. Our map style uses a renderer-independent approach to solve this. We use localization (l10n) functions that create readable names. They are implemented as stored procedures in the PostgreSQL database which contains the OpenStreetMap data. The targeted Latin language (German, English, …) can be easily selected. The talk will show how these functions currently work and will give an outlook on potential future extensions. In contrast to almost all legacy geographic data, OpenStreetMap does already contain a lot of localized data acquired by mappers from all around the world, which should be used whenever possible (example: Japan instead of 日本). Automatic transliteration can then be used as an alternative if no Latin names are available in the database. Especially when using transliteration there are many pitfalls which have to be addressed depending on language and country. Some of them have already been dealt with by the current implementation and are presented in the talk. Others, which appear difficult or impossible to solve, are also shown. Another challenge which exists in localization of maps are political problems. I will briefly describe some of these issues at the end of my talk. Sven Geggus (Fraunhofer IOSB)
10.5446/20285 (DOI)
Okay, welcome to the last talk in this session. It's Astrid Emde. She's quite well known in the German-speaking chapter of OSGeo, the FOSSGIS. And yeah, she will talk about Mapbender 3. So please go ahead. Okay, hello to all of you and thanks for the introduction, Marc. We stay with the topic of web mapping, and I would like to show you Mapbender 3 and how you can provide applications with this geoportal framework. You will get to know it; it's quite easy to provide applications with this software. My name is Astrid Emde, I'm from Bonn, so my way to the conference was not very far, because I work at WhereGroup, which is located in the city centre. I have been part of the Mapbender team for a long time, and I'm active in the FOSSGIS e.V., which is the German-language OSGeo local chapter. I'm also involved in OSGeo-Live, and my job is to write the documentation and to get Mapbender 3 ready for the next version, which comes out every half year. WhereGroup specializes in web mapping and provides solutions and helps you to bring your data to the web. This is our focus, but we help you with other problems around that as well. We are a company with more than 20 employees. We provide a platform called Meldemax, which is based on Mapbender 3 as well, where you can report problems and ideas in your town, and we have a metadata solution called Metador 2, where you can follow INSPIRE rules and edit your metadata. We do consulting and training and help you to get your data published and organized. But now we want to have a look at Mapbender 3, and first I want to show you what Mapbender 3 does. Maybe some of you will really get excited, because it's a web client suite with an administration interface, and the great thing is that you can create new portal applications without writing a single line of code. So maybe the programmers get bored now and think, hey, I want to code, but for some of you here it might be quite attractive to get a framework where you can configure everything with a web-based administration interface. You can create and maintain OWS repositories, so you can upload or register all your services, WMS services, in Mapbender and then arrange them and provide them in the applications. You distribute them, configure the services and ship them to the applications that you created. You can create users and groups and give them access to the applications and to the services. So we have these three components: applications, services and roles. This is how Mapbender can look. When you install Mapbender, you get three demo applications that look similar to this one; this is one of them. You also get a mobile template that you can use to provide your mobile applications. You see we have different areas and different elements that you can provide in your application, and you will see you are quite flexible in which functionality you want to give to your users and which services you want to publish. So this is the front end of our application, and in a few minutes we will have a look at the back end and you will learn how to administrate it. We have different elements, or plugins as you may call them: you can change the scale, you have a navigation toolbar where you can change the scale, you can use the scroll bar and the mouse, so all the navigation capabilities that you are used to from a modern client.
You have an overview map here in the corner, you can change the projection, you can define the projections that you want to support, and then the user can switch between them. You have a sidebar that you can use to put elements in, like the layer tree, maybe a legend, or redlining. And here you can see that in this application we have two WMS services, a Mapbender user service and an OSM demo service. You see the green name with a folder, that is the root layer of the service, and the services have some layers that you can activate and deactivate, and maybe you can get information from the layers. You can use the context menu to change the opacity, you can zoom to the service, you can get metadata, and everything is quite flexible to configure. So our idea is to provide elements, but maybe an element is not always the same: one user might like it this way, another one likes more functionality, so we will see in the back end that you have the possibility to configure each element. You could add a legend to your application. You have a WMS loader, so your users can add more services to the application while they are using it. That looks like this: the service gets added to the application, and via drag and drop you can change the order of the services as you like. You have a measure functionality, you can measure lines and distances or create areas and calculate the area, and this information you can pass on to the print. You have a print element that is flexible as well: you can configure it as you need it, you can rotate the map, and you can provide templates for your print that you design on your own. We already ship some templates for different formats, and you can design them like they did here in Troisdorf, so you add some information about your town and maybe some notes on what you are allowed to do with the print, and then the map could look like this. Now we want to have a look at the back end and see how easy it is to provide your own application. So how does it work? When you want to administrate Mapbender, you have to log in to do all the work in the back end. This is the view that you see when you just open Mapbender without being logged in: you can open the applications from here, and the application that you saw before, you can use it and get there from here. When you log in, at the top you have more functionality. You can see at the bottom there is one application where you have the pen, which means you can edit this application, or you could copy it or delete it. You can create a new application; you have a tree on the left with more functionality when you are logged in, so you can say 'new application' and create a new application, which is empty at the beginning. Then you can edit the application with the pen and see here the back end, where you can configure the layout. That means you can decide which elements you want to provide, whether you want to provide a legend or not, whether you want a layer tree or not, and then you can populate all the areas here with elements.
An element that you want for sure is a map element, so we will have a look at how you can add a map to your application. You go to the content part and choose one of the elements, the one we want is the map element. You can configure each element; each element has different parameters that you can set, and for the map element you can choose the projection, the units and the extent, the start extent, the max extent, the zoom levels that you want to provide and other EPSG codes that you want to support. After adding the map element and adding services, and you will get to know how that works, you get a map which looks like this. It is very simple, and the functionality is not very complex at the beginning, but as we saw in the demos there are lots of elements that you can add to an application to get complex applications. Now let's see how the services get into Mapbender and are published in Mapbender. A WMS service has an address, you know the GetCapabilities request with which you can address the service; here is an example. So you go to 'Add source', add this GetCapabilities request and thereby add the service to Mapbender. Mapbender then knows about this WMS and knows all the information from the GetCapabilities document: whether a layer supports feature info or not, whether it is a root layer or group layer and so on, or whether it provides a legend. With all this information the next steps follow and you can publish a service to an application. So you go back to your application, click plus and add a layerset. That is the first thing, because your map needs a layerset, and your overview needs a layerset that you want to publish in the element. Then you choose the service that you want to provide in this layerset, and every service has lots of information, like the formats that it supports; you can see at the top under format that image/png is selected here, which is fine, and at the bottom you see all the layers that are provided by this WMS. Now you can decide whether this service should be activated when you start your application or whether it should be deactivated, by checking the checkboxes here. So you have flexibility: after loading a WMS you can configure how you want to ship it to your application, and then you can go on and define a layerset for your main map and a layerset maybe for your overview map, put everything together and get an application which is a bit more complex. The easiest way to build up applications is to copy an existing application, because it already includes all the elements; otherwise it's a bit of work to build up a complex application. So now you want to give this application to specific users; not everyone should use this application, but only me in this case. So you create a user, which needs a name and a password, and you can give some more information about the user. You can create groups as well and assign the users to the groups. Then, as a last step, you go back to your application and say, okay, in this case my demo application should be used by Astrid Emde, so when I log in I will get the possibility to view this application. In the screenshot it doesn't make so much sense, because you see in the bottom line that this application is accessible by the anonymous user as well, so everyone can see it, and I would have to delete this last row so that only Astrid Emde and the root user could use it.
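For reference, the GetCapabilities request mentioned above is just a URL against the service endpoint; a typical one looks like https://example.org/ows?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities (host and path here are placeholders, the query parameters are standard OGC WMS). Mapbender parses the XML capabilities document returned by such a request to learn about the layers, formats and feature-info support that it then shows in the back end.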
So now you have had a look at the back end and how an application is configured; that is more or less what you have to know. I have some more slides where you can see which functionality Mapbender provides, and I will show you some solutions where customers have Mapbender in action. You have a print element, and on the other side you have an image export, so you could add an element which exports the map that you see as PNG or JPEG. You have a meeting point functionality that you can add to your application: the user clicks in the map, and then a mail client opens, or a link is generated which will open Mapbender and show a message in the map. You have a redlining functionality, which is not permanent: it helps you to make some red lines and maybe print them, and after logging out the information is gone, but it's nice for making a sketch. And here you see Mapbender in action. The city of Gütersloh has this nice application; it provides information about landscape development plans, so you can go to your address with the search router, which is part of Mapbender as well. It's based on SQL, so you can configure it quite easily on top of your table and set up a search for addresses or parcels or trees or lamps or whatever you want. So you look for your street, it has autocomplete functionality and shows all the results in the map, and then you can get information about this landscape development plan. Another step into the back end: you saw this search interface, and for some elements we have YAML configuration. I said you don't have to write code, and maybe this already counts as code for some of you, but for some elements you have to write a YAML definition of how your search should look. So you define the name of the search, which table it uses and how the form should be set up; in this case it's only the column 'ortsname', it's required and it's an exact search, and at the bottom you see which result columns should be offered to the user. Here you can see a different element: it's a Solr search that we have integrated as an element in Mapbender, so you can send a request to a Solr service and get information back; it's a one-field search. Here in the Metropole Ruhr you have the same thing. And there's another solution: here you can see the feature info of a WMS service, and there's a special functionality where you can provide a service with feature info, and when you click on this link you can load more data from another service into your map, so the user can go to the region he or she is interested in and load additional data. We have WMC, web map context document, support, so you can save a region or a configuration that you want to use again tomorrow, and then you can reload this configuration with a select box. Then we have a complex layer tree; you can see it here in the city of Troisdorf, which is close by. You already saw that you can provide layersets, I named them 'main' and 'overview', but here in Troisdorf they made thematic categories, for example general information or development planning; those are layersets to which they added WMS services, and then you get this categorized layer tree.
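To illustrate conceptually what such an SQL-based search with autocomplete does behind the scenes, here is a small Python sketch. It is not Mapbender's actual search router code or its YAML schema; the table and column names (adressen, ortsname) are made-up examples in the spirit of the configuration described above.

```python
# Conceptual sketch of an SQL-backed autocomplete search like the one described
# above; table/column names are made-up examples, not Mapbender configuration.
import psycopg2

def search_addresses(conn, user_input, limit=10):
    """Return matching rows for the autocomplete box and the result table."""
    sql = """
        SELECT ortsname, strasse, hausnummer, ST_X(geom) AS x, ST_Y(geom) AS y
        FROM adressen
        WHERE ortsname ILIKE %s          -- case-insensitive prefix match
        ORDER BY ortsname
        LIMIT %s
    """
    with conn.cursor() as cur:
        cur.execute(sql, (user_input + "%", limit))
        return cur.fetchall()

# Usage: the client zooms the map to the x/y of the row the user picks.
conn = psycopg2.connect("dbname=gisdb")  # connection parameters are placeholders
for row in search_addresses(conn, "Bon"):
    print(row)
```

Conceptually, the search element turns the YAML description of form fields and result columns into a parameterized query of this kind and zooms the map to the geometry of the selected result.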
This is a screenshot of the WMC, the web map context document, again, and here you can see a functionality where you can provide background configurations. We saw the layer tree and how you can switch in the layer tree from one topic to a different one; you can also do that with a special switcher and change the configuration, or change the map that you see here, with these buttons. You saw from the slides that I showed you that Mapbender comes in different layouts, so we provide a CSS editor where you can override the design of Mapbender. The original design is the black design that you saw at the beginning, and we have this editor where you can override, in this case, maybe the background colour and the button colour, and then it will look like this. We have a nice element called the HTML element; it's very flexible, you can provide images or links or an imprint or so in your application, you just write down the text and add it to the sidebar or to the top, and in this case, with this configuration, I added the OSGeo logo to my application. We have a mobile template that you can use and populate with your maps. We have a digitizer functionality, which is configured with YAML as well. You see you can provide very complex forms. The editing goes directly to the database, in this case without WFS-T in between, so you have to say which table you want to edit. You can edit all the attributes of your features, you can work on points, lines and polygons, and you define which functionality you want to provide. Here you see a polygon digitizer application where you can draw ellipses, circles or donuts, and you are flexible in how you want to set up your digitizer. If you want to get to know Mapbender, you can have a look at the gallery here and look at the solutions. Here you also find the link to the documentation. We have documentation in German and English where every element is described, including how you can configure it. If you want to know what's behind it and how we programmed it: it's a PHP application and we use the Symfony framework, which brings a lot of functionality with it, like Doctrine, Twig, Monolog and so on. We use OpenLayers, still OpenLayers 2 at the moment, because when we started with Mapbender 3, OpenLayers 3 was not finished, it was still under development. And we go for new releases: after FOSS4G we will publish version 3.0.5.4 with lots of bug fixes and support for PHP 7. We are working on the next feature release with lots of new features and support for a new Symfony version, and you will get more functionality in the digitizer; you can edit geometries without attribute data, like you can see here. You have a query builder where you can analyse your data, display the information and export it as Excel or HTML. If you are interested in joining the team, you are welcome. We do hacking events, we do meetings, mostly here in Germany, but we have been at OSGeo hacking events as well, and if you are interested, you can meet us on Saturday at the OSGeo code sprint after the conference. If you are interested in Mapbender, come around and meet us. Thanks for your attention. Great talk, also in time; all three of my talks were in time, I am very happy. So, are there any questions about Mapbender? There is one hand raised over there, so may I ask you to pass this around? Thank you for the talk. The question is: is OpenLayers 3 on the roadmap, and when might that be? The question was whether OpenLayers 3 is on the roadmap. Yes, it is definitely on the roadmap. My colleagues already tried out, in Bolsena, how to integrate OpenLayers 3.
We are also thinking about integrating Leaflet, so maybe building a layer where it is easier to support different map clients. Maybe in some years there will be another product which is very attractive, so let's make it more flexible. I think it will take some time, but maybe at the next conference we can present it here. More questions? There is one again in the last row. Thank you. Is it possible to add a WFS service as a layer? So you ask whether it is possible to have a WFS layer as a layer. At the moment we only support WMS; you saw how you can upload a WMS. But it is a question which is asked quite regularly, and we think we have to integrate it soon. As you saw with the digitizer, we already have a feature element which grabs geometries from a database at the moment, and with this feature element it should be easy to support WFS as well, because OpenLayers supports it for sure. It is only our back end that has to enable the administration of WFS layers. More questions? We have a little bit of time to spare because this is the last talk of the session. Are you hungry already? So I have one more question, my question then, since we have some more time. You showed this user, role and group model; which is the element that is secured? Is it at the service level or at the application level? Did you get the question? Okay. So, Mapbender does not secure your services. If you provide a service which is on the web, Mapbender can't hide it, but you could put your services behind a firewall and Mapbender can work as a proxy. So you can secure your application and only provide the application to specific users or groups, but if your services are still public, Mapbender can't hide them, and if someone grabs the address of the service and gives it to other people, they could build up their own application. So the way to do it is: you should provide your services behind a firewall, tell Mapbender to work as a proxy, and then Mapbender will get the image and give it back to the customer or the user outside. Thank you. Are there more questions from the audience? Okay, I want to thank you again, and all the audience for asking questions. Nice. Thank you.
Mapbender3 is a client framework for spatial data infrastructures. It provides web based interfaces for displaying, navigating and interacting with OGC compliant services. Mapbender3 has a modern and user-friendly administration web interface to do all the work without writing a single line of code. That sounds good and is fun! Mapbender3 helps you to set up a repository for your OWS services and to create individual applications for different user needs. The software is based on the PHP framework Symfony2 and integrates OpenLayers. The Mapbender3 framework provides authentication and authorization services, OWS proxy functionality, and management interfaces for user, group and service administration. Mapbender3 offers a lot of functionality that can be individually integrated in applications, like redlining, digitizer and search modules. Astrid Emde (WhereGroup Bonn)
10.5446/20284 (DOI)
So, let's continue with the next session. I'm really happy to welcome Thorsten Reitz from wetransform. I met Thorsten at the INSPIRE conference, I think only last year. Since then, whenever I have questions about complex models, I always try to call him. He usually doesn't answer. I'm really curious about this work. Thank you very much for the introduction, and thank you all for staying for one of the last talks of today. One of the tools that is maybe not so well known in the FOSS4G community in particular, but has been around for a while, is what I'd like to present to you today. So maybe to give you a little bit of context: why are we building these things? Our idea quite a few years back was that we want to build tools that really help make open standards work. In open standards we often have really rich, object-oriented models, like for example in INSPIRE and all these others that are mentioned here. Encodings also tend to have their pitfalls and tricks. These standards are built for extensibility and flexibility, which doesn't necessarily work very well with all the existing tools. So we said, okay, one thing we need to do is to provide something that helps people analyze, transform and validate the data sets that they're working with, so that they can provide high-quality open-standards data sets. And so, by now almost 10 years ago, we started working on something called Hale. The three core ideas behind it were: first of all, we want to enable people to understand all these rich and complex models, to provide them with a way to explore them and also the data they have available in them. Then, to make it really easy, compared to the other solutions at the time, to actually do a transformation. There is always a source and a target, and I just want to specify how I have to go from one to the other. In more procedural approaches you typically write a script with many, many lines, or you create a pipes-and-filters graph that can be really complex, and we wanted to improve on that user experience. And the other thing, something that many of you probably know, is that people learn much better when they get real-time feedback. You test something, you get feedback immediately, like you touch the surface of a stove and it's hot, and you learn immediately: okay, I shouldn't touch that again. So it's usually a good idea to give somebody who works, especially on a complex topic, immediate feedback. That was one of the design goals: you change something in the transformation, and you immediately see what the result is in these views. We also paid attention to making this an open source project from the beginning. It was pretty clear that otherwise it probably would have been, well, it was a research project originally, and it would maybe not have been accessible at all anymore. And we wanted to make it an open platform, so also from the beginning we documented which extension points there are and how you can actually integrate it into your own applications, for example. A bit of history. Originally, the major funding for Hale came from a project, FP6 at the time, called Humboldt, which went from 2006 to 2011 and was basically the foundation where most of the original concepts were developed, where there were some additional tests, and where the software started to get a couple of users. That was about here. But as you can see, it continued to be developed.
So there were something like 15 to 20 projects in that period where it was used, for research, for different types of actual deployments and so on. And then in 2014 we decided that we couldn't do that as a side project anymore, just being dependent on research projects, so we decided to found a company, wetransform. And since then we have convinced quite a couple of partners and users to use the software and also to work with us on improving it. And you can see this fat thing here; that's the ramp-up to release 3.0, which I'm going to talk a bit about later as well. Yeah, if you wonder what people actually do with it, I've got three examples for you. One of the larger examples that we're currently doing is in Germany, where we have a standard called AAA: that's ALKIS, ATKIS and, the third one, which is not used so much... yes, AFIS, thank you. And the special thing about it is that if you think INSPIRE is complex, then you probably haven't seen that one yet. It is significantly larger and has far more relations, and the relations can be of many different types, so they can be expressed in lots of interesting forms. So there were some challenges in this project, but by now we are almost done with it. And the interesting part here really is that these complete mappings had to be delivered not just in the form of executable transformations, but also as really readable documentation. So how do you actually create readable documentation from something like that? If you had used something like a normal programming language, that would be pretty hard to do. I'll have a look at that with you later on. Yeah, and the time frame, just as a side note. Another project that we worked on was with a group of about 96 municipalities in the state of Hessen, who all wanted to have a common solution for implementing INSPIRE. For that they had actually created local harmonized data models, but they needed to do a transformation from those local harmonized models to the INSPIRE models as well. And here the nice thing was really that we didn't have to create 96 mappings; rather, because they had already agreed on a shared structure for themselves, we had just minor variations and could do one basic mapping and a couple of ones that adapt to the specific needs of individual organizations. And then maybe one more. Here the special challenge was, this is a project with the European Environment Agency, they have a couple of pan-European data sets, like this protected sites data set. And here the challenge was, for example, that they actually have a few requirements where you need to aggregate all protected sites of one type across Europe into one object. And we managed to do that in the end, though there were some challenges involved, of course. So what's the principle behind the software, and why can we actually make it a bit easier than other software? We decided, like I said, ten years back, to take a look at something that was used in the semantic web stack at the time, which is called declarative mapping. We have two data structures, and we just declare: I want to pick an element from the source, related via a function to an element of the target schema. So for example, we say the tree type should become some, I don't know, plant type or something. And we apply a lot of such functions, so that in the end we have all these individual cells or mappings, and then we can do something with them.
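To make the idea of declarative mapping cells a bit more concrete, here is a toy sketch in Python. It is purely illustrative and is not Hale's data model or API: each cell just declares a source property, a target property and a function, and a tiny engine applies all cells to each feature, so the user never specifies the execution order.

```python
# Toy illustration of declarative mapping "cells" (not Hale's actual data model).
# Each cell declares: source property -> function -> target property.
from typing import Callable, Dict, List

class Cell:
    def __init__(self, source: str, target: str, fn: Callable[[object], object]):
        self.source, self.target, self.fn = source, target, fn

def transform(feature: Dict, cells: List[Cell]) -> Dict:
    """Apply every mapping cell to one source feature, building the target feature."""
    result = {}
    for cell in cells:
        if cell.source in feature:
            result[cell.target] = cell.fn(feature[cell.source])
    return result

# Example alignment: classify one attribute, convert the units of another.
cells = [
    Cell("tree_type", "plantType", lambda v: {"oak": "broadleaf"}.get(v, "other")),
    Cell("height_ft", "heightMetres", lambda v: round(v * 0.3048, 2)),
]

print(transform({"tree_type": "oak", "height_ft": 30}, cells))
# -> {'plantType': 'broadleaf', 'heightMetres': 9.14}
```

The point is that the set of cells is just data the engine can analyze, which is exactly what enables the execution-plan optimization and format independence mentioned next.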
The nice thing about such an approach is not just that it's easier to use, but that it actually also offers some additional advantages. One thing is that the user doesn't decide anymore what should happen in what order; rather, we can decide that. We can analyze the data, the schema and the mapping, and then determine what an optimal execution plan is. And the other thing is that the mapping is actually independent of the concrete data format. We usually really work on the level of the conceptual model, so you can apply, for example, the same mapping that you used for a shapefile to, let's say, a database table from a Postgres database. Yeah, and when I say that performance is a major differentiator, especially when working with complex schemas, this is one of those cases; you can guess what the other software might have been. These evaluations were actually done by a customer. They had gotten the workbench projects for the F-star-star software from a vendor in Germany and tried the mappings that we created for them. And the difference is really, especially when you need to build complex structures, that Hale is significantly faster, because that's what it was built for. We kept in mind from the beginning that we need to create these nestings ten levels deep and so on, and basically that works quite well; in some cases we have a performance difference of a factor of 200. Yeah, so I thought it would be a good idea to not just tell but also show a bit, so let me switch to Hale. All right, can you still hear me? Okay. So that's essentially what the interface looks like. We have the schema explorer, and the idea behind the schema explorer is that whatever structures I have here, and they can be very deep in some cases, are always broken down to a tree. So even when there are loops and things like that, in the end it's a tree, and yes, there are places where you can go to very, very deep levels in this tree. The actual approach is always to just pick an element, or multiple ones if you need multiple inputs, from the source side, like here this feature ID, and pick an element on the target side, like the local ID that's missing (that's also why the validation is complaining), and to apply a function. And one of the things that Hale also does is that it will not always throw all its functions at you; rather, it will tell you: we think these functions are probably the ones that might work. A very generic one is rename. Rename is the function that tries to do everything automatically, like structure matching, conversion of dates or formats and so on, so it's always a good idea to try that one first, and only if something doesn't work do we look into the more complicated functions. So the transformation was executed, the validation was executed, and now I directly get feedback here. Uh-huh, okay, it looks like the dataset is valid now. If that's not enough, I can also really have a look at the data directly. Here we have the source dataset and the target, transformed data. We can see, for example, what the resulting structure is. Here, for example, this river object didn't have a name, but this one had one, and that's directly represented down here: if you look at the geographical name structure of INSPIRE, you can see how it actually appears there in the text of the spelling of the name. And of course it's geographical data, so it makes sense to also look at this kind of data in a map.
And this is a perspective that's especially useful if you have distinct styling for both your source and your target dataset, so that you can directly see, for example, whether all the classifications are picked up correctly. And, um, yeah, let's see if the network actually does something. Maybe not, no tiles coming. Yeah, if you have your own maps, of course, you can also say I want to use a custom tile map or a WMS in the background if you want to; that's entirely left up to the requirements you have. What also works is, for example, to have a look at the alignment itself. You saw the default perspective that we had here; normally the view is that you work on the schema. But sometimes you really also want to know: okay, what do I actually have in terms of the alignment? And you can see, for example, here, okay, there's a function that connects the width to the geometry. Let's have a look at that parameter, what does it actually do? Here we see, okay, it's actually a mathematical expression that tells the system how much to buffer. And I realize, oh, I accidentally did the foot conversion the wrong way around. So let's change that and let it run again. You can also work with, let's say, small to medium datasets. Now we see it's much fatter than before. So that's more or less the nice thing, I think: you make a change somewhere in the mapping, and you get direct feedback on various levels, in these table views, in the map, via the validation. Usually, if you pick your sample data somewhat sensibly, you can keep the response time below one or two seconds and really make good progress that way. One of the things I mentioned before was this HTML documentation, which I've got here. This alignment that you create in Hale is something that you can export in many different ways. One thing we heard about yesterday is that you can actually export it as an app-schema configuration for GeoServer, or as a matching table in Excel, or as an interactive HTML document, and so on. There's even an XSLT exporter. So if you somehow don't trust the transformation engine that we've built and would rather do XSLT, then please go ahead, but be aware that some of the spatial functions are obviously not available in XSLT. And this interactive documentation is something that people can use, for example, to actually review the mapping. They can go through this and think: okay, that makes sense. And what was the cadastral parcel again? They get some information on that. They can really step through this. In some cases we also have to use scripts; not everything can always be done with built-in functions, but one of the things that Hale also offers is the ability to define your own custom functions that you can then use everywhere. And that's something you can review entirely using this kind of documentation. All right. After this very quick view of what Hale can do, let's go quickly back to what's up next. In the abstract I also promised that I would explain a little bit about what our next plans are. For the current release, that's 3.0. Originally we had scheduled it for last week, obviously, because it would have been very nice to have it out before this conference, but now it's going to be next week. The main thing was really improvements to these custom functions.
So basically, where you decide that the 70 or so built-in functions that Hale has are not enough for you and you would rather have a little script that does something in addition, we've made that much easier now and also better reusable and so on. You've seen the interactive mapping documentation, but there's also more. For example, before, we always had one reference map. Originally it was OpenStreetMap, but we always hit the heavy-usage limit, so that was not such a good idea. Then we used another one that has since been discontinued, and now you can basically pick your own. With almost every release we also add one or two formats; this time MS Access was sponsored by a customer, so we added that. And something that might be interesting for the developers: Hale was originally an OSGi application, but now all its components are available as normal Java libraries. So you can really pick whatever you like: let's say you need to work with schema-related things, pick the schema libraries, or if you need some part of the transformation engine, just grab it and use it for whatever you need. We also added quite a lot of generators, for example for generating Hale projects based on a set of parameters; that's used, for example, in the JRC interactive data specifications toolkit. I had to get the words right, sorry. But we also use it internally a lot. And there are all kinds of APIs. So if you think, okay, it's a desktop application, it doesn't end there; you can use it in all kinds of ways in your server environment as well. There's a REST interface, there are command line interfaces and so on. We've also made quite a few scripts that make it easier to run it in such contexts. And the rest is more or less left to you, whatever exactly you need. End of November is going to be the next release. For that, I currently assume that MS SQL Server is going to get in, and the bigger functional change is going to be aspect mappings. So, aspect mappings: you saw that I picked an element on the source and on the target side of the schema. However, sometimes we have many, many tables, for example from a database, that don't have an inheritance structure, so I can't pick something higher up in the hierarchy and basically have to repeat the same mapping many times. An aspect mapping allows me to do that just once and say: match by name, or match by property type, or match by namespace, or any of these. I can basically tell it how lenient it should be, so that I don't have to do the same mapping 80 times anymore just because I have 80 tables. And the other thing that we're doing is that we want to make it easier for our users to share both the transformation projects that they create and the custom functions. So we're offering, in addition to the desktop and server environments, a cloud environment that allows you to do exactly that. And there will be a couple of additional functionalities, like modeling tools. One of the continuous requests we've had for Hale was: I would so much like to just click on any type in the schema explorer and create a subtype, for example. And in the end we decided to build that, but because it's not connected just to transformation, it's more of an independent thing if you want to model, we decided to put it on the online platform as well. Yeah. And other than that, I can just say we're always looking for smart and motivated people, so join the team. And if you need any information, I hope these URLs are more or less complete.
So there is also going to be HaleStudio.org and maybe Monday or something like that. Unfortunately, we didn't get that complete before the week anymore. Okay. Thank you very much. Sure. There are questions. I guess we have questions. Hi. I missed the start talk. So maybe it's not a question. How does this compare to FME? Being open source. Yeah. Well, I think there's three main differences. So FME is more general purpose. So you have like 400 or so formats by now and Hale has less. So it's close to 40. But honestly, you might not need all of those formats anymore. If you do, well, okay. We've been adding ones on request whenever necessary. The other thing is it's really a usability thing. So in FME, if I go from one complex schema to another one, I'm going to generate a gigantic workbench with lots of feature mergers and joiners and whatnot. And that doesn't necessarily work well anymore. I will have a really hard time debugging it and I will have a really hard time running it at some point. So I've personally had many cases where I was unable to process even a couple of hundred megabytes of data if the workbench became sufficiently complex. And in this case, what we've done is really build a software that's specifically targeted at working with complex data models. So the technological approach is quite a different one and I think the usability too. And I had one slide up with a performance comparison, especially for these cases of relatively large data sets and let's say complex data models, the performance difference is usually between 10 and 200 as a factor. More questions? I have a question concerning the app schema for GeoServer. How bad is it? Well, obviously it depends on two things actually. So the two limitations are on the one hand what app schema can do, because by far not everything I can do in a GML application schema. I can also have supported an app schema. We've been working with Ben Karadok-Davison getting a couple of these fixes out and there's also more coming. So I'm optimistic that we'll get, at least for the concrete requirements we have right now, and inspire in a couple of other areas I think we'll get there. It's not that many things that are missing. Then on the app schema export the second thing is that people maybe created a normal hail mapping and their expectation is that everything they did there automatically also works in GeoServer, which unfortunately is not the case and will also not be the case. So there is one thing in hail which is called the compatibility mode. It's indicated normally, somehow I can't see it right now. It's not funny. Compatibility mode. So there's CST, GeoServer and XSLT here and each of those has slightly different, well in some cases quite different sets of things that they support. So if I switch here for example to GeoServer now, they will most likely give me a warning about stuff that's not supported or not. Yeah, latest build from one hour ago, usual. But normally it should give you here, there were the red dots. I don't know why the icon is small, but the red dot actually tells you that it's now not working anymore. It should look different. Do people understand what the consequence of this is? This is a major development. Previously, George can tell you, he was writing these conversion scripts by hand to provide the soil amount to inspire. And now he can do it in official editor and have inspired support in GeoServer in minutes more or less, hours instead of weeks. Yeah. Can you pull your existing app schema? Pardon? 
Can you pull your existing app schema stuff from GeoServer into LStudio? No. At this point it's just one way, sorry. Interesting idea. But you can import the XSD and it will help you build it again. Yeah, that would be possible. Yeah. What about a compatible mode to degree? Yeah, it's actually, it's one of these things that's been on the wish list for like a year at least. So at least since the GeoServer app schema thing came up, there was the idea to also do that for degree. I have to say that in concrete terms it's kind of a problem of a lack of funding for that particular development. Because it's not done in a day, as you can imagine. And we had briefly evaluated whether we could do it in the code sprints around this conference. But both from the degree side and from us it was, yeah, ruled out already like two months ago and we thought, yeah, it's probably going to be too much effort to be realistically done in a couple of days. So yes, possible and also something that I would like to do, but it's not in the priority list right now. Did you make a note? Toss it? We are looking for funding. Yeah. That's all I said. Yeah, yeah. More questions? Thank you Thorsten for this great work. That was amazing.
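In the live demo above, the source width attribute was connected to the target geometry through a small mathematical expression (including a feet-to-metres conversion) that tells the engine how much to buffer. Purely as an illustration of what such a declarative mapping computes, here is a minimal imperative Python sketch using Shapely. The attribute name `width_ft` and the half-width buffering are assumptions made for the example; this is not hale studio code.

```python
# Minimal sketch (not hale studio internals): an imperative equivalent of a
# declarative "buffer the line by its width attribute" mapping.
# Assumption: each source feature has a line geometry and a width in feet.
from shapely.geometry import LineString

FT_TO_M = 0.3048  # doing this conversion the wrong way round is exactly the
                  # kind of mistake the live map preview makes visible

def transform(source_features):
    """Turn (geometry, width_ft) source features into buffered target polygons."""
    for feature in source_features:
        line = feature["geometry"]              # e.g. a Shapely LineString
        width_m = feature["width_ft"] * FT_TO_M
        yield {
            "geometry": line.buffer(width_m / 2.0),  # buffer by half the width (assumed convention)
            "width": width_m,
        }

# usage with a tiny sample dataset, as one would pick sample data in hale studio
sample = [{"geometry": LineString([(0, 0), (10, 0)]), "width_ft": 6.0}]
print(list(transform(sample))[0]["geometry"].area)
```

The point of the declarative approach described in the talk is that this logic lives in the alignment itself, so it can be reviewed, documented and re-exported rather than buried in a script like the one above.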
hale studio is an open source environment for the analysis, transformation and publication of complex, structured data. We have been developing hale studio since 2009 and have reached more than 5,000 downloads per year. Most of our users employ it to easily create INSPIRE data, CityGML models, or to fulfill e-Reporting duties. Some use it with BIM data, health data or even e-commerce information. In the last year, hale studio has gained a number of headline features and improvements, such as integration with GeoServer app-schema and deegree's transactional WFS. We have also added support for more open formats, such as SQLite and SpatiaLite, but also for enterprise formats such as Oracle Spatial and Esri Geodatabases. In this talk, we will provide a quick introduction to the declarative real-time transformation workflow that hale studio affords, highlight the latest developments and provide an outlook on the roadmap for 2016 and 2017. We will also highlight some of the most interesting projects our users are doing. Thorsten Reitz (wetransform GmbH)
10.5446/20283 (DOI)
This is also not another technical presentation. I work in Toulouse at CNES, the French Space Agency, mostly on the development of an open source library called the Orfeo ToolBox (OTB). The goal of my presentation is not to present OTB itself; I have only one slide about it, and I give a presentation about OTB on Friday morning. It is about the status of the OSGeo incubation process for OTB, and how it allowed us to make some improvements in the way the governance of the project works globally. Orfeo ToolBox is an image processing library. It builds on other libraries such as GDAL and OSSIM, and its development was funded as a by-product of the development of a satellite called Pléiades, an optical satellite able to produce sub-metric images, launched at the end of 2011. Since 2006, CNES has been developing this software, which aims at giving people tools and algorithms to help them extract useful information from images. It is used at CNES, of course, but now also in other agencies such as the European Space Agency, which uses OTB as one of the libraries in the ground segment that produces the Sentinel-2 images we saw this morning. It is written in C++ on top of the Insight Toolkit (ITK), which is also an image processing library, but dedicated to medical images, and it uses lots of other libraries: GDAL, OSSIM, OpenCV. To give you an overview: we saw this morning that lots of raw data is now available, with Landsat 8 and Sentinel-2, and the goal of OTB is to let users go from this raw data to, say, a classification map, combining algorithms, external data and so on. The topic of my presentation is to give you an overview of the OTB OSGeo incubation process, to describe how it allowed us to change the way we work and how decisions are now taken in the project, to explain how the project steering committee that we set up one year ago is working, and to sketch the possibilities offered by this more open governance.

So, the OSGeo incubation. OTB is not a really famous project within OSGeo, because it is quite specific in terms of the data we work on. It is a library, you need to combine algorithms, and, as the speaker from the Commission said this morning, the users are mostly scientists or public administrations, so it is quite a niche, I would say. But we have users, and we applied for OSGeo incubation in 2011. We actually restarted the incubation process in 2013, found a mentor (I want to thank him, Landon Blake) and started to complete the checklist. It is a long process, with sometimes long discussions, but as you will see it helped us really improve the way we structure the project. So here is the checklist. OTB has always been an open source project since its beginning in 2006: everything was public, there was a mailing list, and so on. The first point was the infrastructure transition; source access was fine since the start of the project. A functioning community, however, did not really exist: the project was mostly managed by people at CNES. CNES was funding the library, and people at CNES, like me, were driving the library and the decisions. I will focus on this point, because now we have this PSC. We are now at the last point of the OSGeo incubation checklist, which concerns the code copyright review, but you will see that since the beginning OTB has followed, I would say, good habits in the way we keep track of copyrights: we keep headers on our files, and so on. We do not yet have something like Committer Responsibilities Guidelines; that is in progress, and I will come back to it. After that we should be in good shape, I think, to become a member of OSGeo, I hope.

So how did the decision-making process work before we created the PSC? As I said, it was a benevolent dictatorship. Who made feature requests? Everything was decided, in fact, at the French Space Agency, CNES. Everything was open, of course, everything was open source, but the feature requests were made by people involved in Orfeo projects: colleagues at CNES, or users we were working with in France. There were people inside CNES and also users from the mailing list, of course, but who decided in the end? At the beginning it was the CNES team, with the support of a company, CS Communication & Systèmes, a contractor of CNES which has been developing OTB since the beginning and also develops activities around OTB. And who actually wrote the code? It was CS developers, but also people at CNES who are themselves involved in the development and not only managing contracts for the development.

So why did we decide to change? It was not only because of the OSGeo incubation; it was really a matter of transparency, because users were often only informed afterwards of major changes. We tried to get people involved, but there were no rules to follow to add new features, big or small, to OTB. There was no insight into the motivation for doing this or that. It was of course difficult to participate in decision making, because there was no real process. OTB has become a big project, with lots of algorithms, lots of applications and lots of ways to contribute, and more and more people wanted to get involved, but it became more and more complicated to follow what was going on in the project. How to make contributions at that time? There was no procedure. We had external contributors, but no real process for accepting their contributions. And of course there was also, as discussed in other presentations, the question of sustainability: CNES funds most of the development of OTB, but what if one day CNES stops funding it? We wanted to get new actors more involved in the project.

So, about two years ago, we created this PSC. For this project steering committee we mostly took the experience of existing PSCs in QGIS, GDAL and other OSGeo projects. We did not reinvent the wheel; it is the same with code, we do not want to do that, and we did not do it for the project steering committee either. So we now have this structure. What is nice is that, while for more than the first year and a half there were only people from CNES in this PSC, because we bootstrapped the steering committee, there are now people from outside CNES: a scientist from CESBIO, a laboratory doing research using OTB, is now a member of the PSC; there are people from IRSTEA, which is a public administration; and also people from CS, the contractor which develops OTB for CNES and for other companies.

How does this PSC work? The PSC workflow is tightly coupled with the way we now make decisions and add new features to the library. People in this PSC are developers or people who know the library well and follow the process for getting new features into it. They decide the workflow, the testing procedure, how we package, and when we do releases. We also took the experience of GDAL, which I think uses this kind of request-for-changes procedure. You first describe your idea; it is called a request for comments to start with. You go to the OTB wiki, you do not have code yet, but you have an idea, you describe it, you can get comments, and you start a discussion on the mailing list about the new feature you want to add to OTB. Then the development happens, and you propose a request for changes for OTB: another description with another template, and we start a process of review of the feature, a public review and a vote, and then it is merged into the library. We also now have a release manager, who manages the way we introduce new features into the library and when, and merging requires the approval of this person, who is nominated for every release. That is a short reminder of how decisions and development are done now in OTB.

Of course this had some impact on the way we develop new features. We moved to Git; we had been using Mercurial since the beginning of OTB, with Subversion at the very start, then Mercurial, and we moved to a Git workflow to translate this idea of requests for comments and requests for changes into a notion of branches. It is a little bit complicated, but it works like this in lots of projects: you start from the develop branch, which starts at the beginning of the release cycle; you create feature branches, which correspond to requests for changes; they are approved and then merged; and then we start a new branch to do the official release, and so on. This did not exist before in OTB, and declaring this new process made us change the way we develop and manage the code in OTB. The release manager accepts merge requests that correspond to a request for changes approved by the PSC. Every release is now planned every three months. That is also something we changed, because before, users did not have a clear idea of when the next OTB version would come. Now it is every three months, they have a clear idea of what they will find in the new release, and one to two weeks later we publish the release. It is working pretty well.

Something else I want to point out is the way we have set up the description of new features. For developers it seems a little bit complicated at the beginning: you need to go to the wiki, describe exactly what you want to do, and follow procedures and templates. But it really helps to structure things and gives users a clear idea of what is going on and why we are doing such things. We now get requests for changes not only from CNES; I will come back to this. We have new contributors, and we now follow this procedure to get new features merged into the library. For users, every significant ongoing or past change is now public and visible, and users have the opportunity to comment on pending requests for changes.
Everything was always public, but those discussions also happen on the OTB mailing list. What I also want to say is that users can file new requests and comment on them, and for them releases are more frequent, which is a really good thing. For contributors, there is a clear and detailed process, as I said, for how to get code into OTB. Contributors are guaranteed to be treated with equity; I think that was the case before, but now it is stated: we have rules, we follow those rules, and everybody knows about them. And they know the deadline if they want to get a feature into OTB. For developers, following the procedure has some pros and cons, but there are no more silent commits touching hundreds of files, which could happen in OTB before; that is simply not possible anymore. Of course some adjustments are needed: we accept comments from reviews before getting the code merged, and it of course takes more time to develop a feature. For small features you can ask whether we really need a request for changes; the procedure is quite light, but for really tiny modifications it is a little bit heavy.

I will go a little faster. On the whole, the pros: I think what we have done is really good. We have more code reviews and more contributors. The changes we make are more consistent, because one feature goes into one branch and is described. Everyone can give their opinion, even people who are not on the PSC, and new features are more visible, I think. The project steering committee for OTB is still small; initially there were lots of people from CNES, which was not a good thing, since we want to get more people involved, but it is working, and we now have more and more people from other organizations in it. Of course it does not solve the point we discussed before about funding: the funding still mostly comes from CNES. But we hope that this organization will make it possible for new organizations to bring some funding and make requests on their own, without having to get help from CNES. We can improve the way we review code, but that is less important, so I will skip it, and we still have technical problems with the procedure; it does not solve bugs, for instance. I will skip the voting part as well. We hold PSC meetings, over IRC, after each release, and we publish the logs and the minutes of those meetings; that is where the really important questions are discussed. It is still a young organization, it is a tool that we can adapt, and the process itself can be discussed and changed.

So, on to the code review, although I do not think I have a lot of time left. The last point of the OSGeo incubation process for OTB concerns the code provenance review. For OTB, the way we have handled contributions and external libraries has, I think, been done in a pretty good and correct way since the beginning. All the dependencies are licensed under Apache or MIT licenses. The contributions are licensed under the CeCILL license, a French license in the spirit of the GPL. It is compatible with the GPL, and it is also a license recognized by the OSI, which has put its stamp on it, saying that it is an open source license. We do not yet have what is called a contributor agreement, but we want to introduce one. We have, I think, good management of the copyrights, also with the help of packagers like Debian; OTB is now an official package in the Debian GIS repository, which helps us a lot to correct small things and make improvements in this area. In the frame of this code provenance review there is also a discussion, initiated at CNES, about moving the OTB license, which is a GPL-like license, to an Apache license. That is ongoing, and it is why we have not finished the OSGeo incubation yet: I would like to decide on the OTB side whether we move to Apache before completing the incubation, because it would be a big change. I will skip the details of the code review. In the frame of the OSGeo incubation, and also of this discussion about moving from a GPL-like license to Apache, we have done a code review of OTB which is now completed, and we have started the process of contacting the contributors of OTB to get their agreement to change the license. There are many considerations around changing and relicensing a project, and it is not an easy part; it is a big decision, but I think we will be able to complete it this year. And that is my conclusion. Thank you.

So we have time for questions. You have one minute to finish your last slide. Yes, I can finish. What I want to say is that I am not sure that OTB becoming an official OSGeo project will change a lot of things, especially for CNES. But I think that the process and the incubation, even if it is a long process with long discussions, really helped us make the project more open and the community more involved. Even if the project was open since the beginning, everything was open, the mailing list and so on, and I think we were doing this in a pretty good way, setting up this PSC and following all those rules that had been set up in other projects, and trying to adapt them to the context of OTB, really helped us make a more open project. That is it, and I hope to be able to complete the change of license and then to become an OSGeo project, hopefully this year. You can ask me questions.

We have different types of users for OTB. You have users in agencies working on ground segments, using the library as one component of a big project; I am not sure they will see the difference, though I hope it will help them take the time to make more contributions to the library. At the complete opposite end, we have end users, thematic users, scientists, who use graphical interfaces based on OTB, and they also see the new features; but for them there are still the websites and the mailing list, and I do not think they follow all those discussions, so I do not think it changes much for them. But in the middle you have contributors, scientists doing image processing, or organizations like IRSTEA which are not members of the PSC but use OTB as one of the key components spreading through their organization. I think it has changed things for them and helped them say: we can be more involved in this project, we can participate in the discussion. And perhaps it has changed the decision when choosing between one tool and another for new developments; I hope it helps them make the decision to take OTB.

For public administrations, some administrations in France, I do see the difference: they have, for instance, removed proprietary software and moved parts of their processes to OTB. It happened in the same timeframe, as I said, as we changed the governance policy. Yes. Okay, I did not get all of it, but it is a good question. First, the license was chosen in 2006. CNES was not really used to doing open source at that time, so they chose a GPL-like license, for reasons I do not really know; perhaps also with the idea of doing dual licensing. As we discussed in the first presentation, there are different ways to create value from your open source project. CNES does not want to sell software, that does not matter for them, but they want to create an ecosystem where other companies, in France or in other countries, use satellite images but also software, and develop business or activities around OTB. There are companies which use OTB and develop activities for the European Space Agency, for instance, and they have said since the beginning that a more permissive license like Apache would allow them to build proprietary software on top of OTB and, from their perspective, would facilitate the development of activities and business with OTB. So it is more a consideration of how CNES can help the companies around CNES develop activities with OTB. I do not think it will change a lot of things for CNES or for most users; most of them do not really care about the license. Thank you.
One year ago, in the frame of the OSGeo incubation process, the OTB team decided to initiate a Project Steering Committee to formalize the way that decisions are taken. It was largely inspired by the existing governance of other OSGeo projects related to OTB, such as GDAL, Quantum GIS or GRASS. This initiative aims at encouraging people and organizations to join the effort and participate more actively in the evolution and the decision process of the library. Most people understood this approach well and joined the effort to provide high-level guidance and coordination for the ORFEO ToolBox, to guarantee that OTB remains open and company neutral. This presentation will look back on the setup of this open governance, how it improves the way the project progresses, and how it could evolve in the future. It will also be an occasion to discuss more broadly open governance and decision-making processes in free and open source projects. Manuel Grizonnet (CNES)
10.5446/20281 (DOI)
So, hello, everyone. I will talk about PostgreSQL databases and about auditing the data inside so we get to know what is happening inside. And I will make a short demo with a little program I wrote to eject those changes out of the database and present the software and the background inside the PostgreSQL database. So just some words about me. I'm a Java programmer based in Munich and I work for the Stadtwerke München, which is the local utilities company. And we're concerned with the management of all the network infrastructure like power, gas and so on, water networks. And I'm in the GS department and a solution architect for the network information system. So I'm concerned with GS software for about six years now and doing mainly programming in Java software. So now about the problem I want to talk about today. You might have, well, 10 or 20 people editing a database there on the right. And you might want to know what edits are really happening inside my database. So which features were inserted, which features were deleted? You could ask when did the feature change the last time? Or you could ask where did data change in general? So you want to know the region where the data has changed or someone has deleted data. And this might be of interest for technical purposes. So you just think you have a rendering of your data and put it into a cache. When the data changed in the database, you might want to update your cache, your render data. And this could be of interest as well for processor reasons because other people might want to review that change. Or perhaps you have to trigger a workflow because other data has to be adjusted as well. Or you have to inform different systems in your whole system landscape. So I want to describe a little application I've wrote to extract changes. Using mechanisms of the PostgreSQL database. So little architecture slides. You have some data somewhere in a PostgreSQL database. And you edit it with some, might be QGIS, but it doesn't have to. In my demo, it will be QGIS. And then we put onto it a so-called logical decoding plug-in, which is able to pull out the changes. And you can consume them via SQL. And then you can pass those changes inside this Java application. And then you have it inside your runtime of your Java application, all the change sets. And then you can start doing fancy things with it. One is you could make a HTTP call to a geovab cache instance, sending a little JSON document saying, well, here the data has changed. Please update your cache and make a receipt or truncate your cache. Or you could also inform other applications, this one you would have to write yourself because I don't know your applications. But you can write a little Java class which processes this data and does something. And the second thing my application is doing is feeding this data back into the PostgreSQL database into a metadata table, which builds a complete history of all the records that have been changed. And then you have to be careful that you don't consume that change as well. Otherwise it would go round and round. And then you could have a look into this table with a GIS program again, because I also compute the bounding box where the data has changed. And then you see on your map where data has been changing. Yeah. And other responsibilities of this little program periodically check the change sets from the PostgreSQL database. And then pass the output of a special plug-in inside the PostgreSQL database. So pass the real change, compute a bounding box. 
And then, for every changed row in every watched table, it will create an audit record in the audit table containing all the old and all the new values, so you see the whole state of the feature before and after the change. It can also do an HTTP call to a GeoWebCache server to truncate or reseed the affected region. This piece of software is highly configurable: you can switch parts of it on or off, define the polling interval, choose which database schemas you want to watch, and so on, so it could hopefully be helpful for many people. What do I use as a technical stack? It is Java, and the runtime container is Spring Boot, which is a very cool framework for getting productive very fast, because you do not have to write much boilerplate code: you just write the application code and the rest is nearly done for you. For syntax parsing I use ANTLR to parse the output plug-in format, geometry processing is done with JTS, and the rest are common libraries you find when doing Java programming. This application is, I would say, cloud-ready: it is nearly stateless and you can inject all the configuration from outside via a simple configuration file. I gave it an MIT license, so you can do what you like with it, and I shared it on GitHub; have a look and try it out if you like. If you want to run it, you need to write an application.properties file which contains your configuration, saying where your database is, what you want to do, and so on, and then you run it with java -jar and the jar file.

And we will do that now; I will give you a little demo of how this works. We are here in Bonn and I have a little QGIS project. The use case is that I want to inform the public about construction work going on in public space: we need to replace some pipes, and people may want to know that there will be a construction site in this area. So I add a new feature to my database and say, okay, here we are going to do something, and I put some attributes in: it was me who edited this record, for the water network, and I give it a start and an end date, so it starts today and ends a week later. And I save this feature into my database. Up to now, nothing special. Now I start my little program that consumes changes. I run java -jar with the jar file; I have an application.properties file configured correctly in here. When it starts, it wakes up, tracks changes, and reports that it is publishing one change-metadata record to the changes table, because it just found the change we have been doing. If I do a refresh now, I see a new entry on the changes layer: okay, in this region something has been added to the database. Now I could move this feature around and say I want it somewhere else, save again and wait a few seconds; every 10 seconds in the background it polls for changes, and we get a different region. For an updated feature it is now the bounding box covering the state before and after the change. We could also edit some properties. Where is my attribute table? It got lost... yes, it is open. Okay. Now, looking at the database: in the end I am going to delete this feature again, and this is perhaps the most interesting part, because you cannot find deleted features in the database anymore.
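The workflow just described, polling for committed changes, deriving the affected region, and asking GeoWebCache to truncate it, can be sketched roughly as below. This is not the speaker's Java/Spring Boot tool, just a minimal Python illustration; the layer name, credentials and the exact seedRequest payload are assumptions, so check the GeoWebCache REST documentation for your version before relying on it.

```python
# Rough illustration (not the speaker's Java tool): union the old and new
# geometries of a change, take the bounding box, and ask GeoWebCache to
# truncate the tiles covering it.
import requests
from shapely.geometry import box
from shapely.ops import unary_union

GWC = "http://localhost:8080/geowebcache/rest"   # assumed GeoWebCache location
LAYER = "public:construction_sites"              # assumed layer name

SEED_TEMPLATE = """<seedRequest>
  <name>{layer}</name>
  <srs><number>{srs}</number></srs>
  <bounds><coords>
    <double>{minx}</double><double>{miny}</double>
    <double>{maxx}</double><double>{maxy}</double>
  </coords></bounds>
  <zoomStart>0</zoomStart>
  <zoomStop>18</zoomStop>
  <format>image/png</format>
  <type>truncate</type>
  <threadCount>1</threadCount>
</seedRequest>"""

def truncate_changed_region(geometries, srs=4326):
    """Compute the bounding box of all affected geometries and truncate that region."""
    minx, miny, maxx, maxy = unary_union(list(geometries)).bounds
    body = SEED_TEMPLATE.format(layer=LAYER, srs=srs,
                                minx=minx, miny=miny, maxx=maxx, maxy=maxy)
    resp = requests.post(f"{GWC}/seed/{LAYER}.xml", data=body,
                         headers={"Content-Type": "text/xml"},
                         auth=("admin", "geoserver"))   # assumed credentials
    resp.raise_for_status()

# usage: the state before and after an edit becomes one truncated region
truncate_changed_region([box(7.09, 50.73, 7.10, 50.74),
                         box(7.10, 50.73, 7.11, 50.74)])
```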
So now I am going to delete this feature: select it, delete it, save again, wait a few seconds, and we get a third record saying, okay, here something has been deleted. And if we look into the database, into this metadata table, which I will show you as well, you see three entries in this table, because we have done three edits: we inserted something, updated it and deleted it. You have the attributes of the region we have already seen, and we also get some more data: the name of the table, the name of the schema, the transaction ID, the timestamp when the data was changed, and a whole JSON structure containing the full record in the state before it was changed and after it was changed. So if you know SQL and PostgreSQL well, you can also query into this JSON structure to extract exactly the changes you are interested in.

So much for the application. Now I will go back to my slides and describe a bit how this works in the background. To understand that, you need to understand what logical decoding is. It is a feature based on the replication capabilities of the PostgreSQL server. For server versions 9.3 and earlier you could do physical, binary replication of data: the write-ahead log entries are binary and can be shipped to a different server to keep it in sync, but you had no chance to understand what is really inside the WAL file you ship to the other server. The logical decoding feature, introduced in PostgreSQL 9.4, is able to decode these records back to the application level, so you really see which rows in which tables were inserted, updated or deleted. This is what I use in the background.

If you want to set it up yourself, you need to know a few more details. You need to know what a replication slot is: it is a point you can consume changes from. You tell the database server, I am here and I want to consume changes in the future; the replication slot keeps track of where you left off last time and so on, it has a name, and you can consume each change exactly once from such a slot. The data is kept in the transaction log directory (pg_xlog), where all the WAL files live, and be sure that you actually consume it, because otherwise the WAL files will not go away and will fill up your disk. The other thing you need is a so-called logical decoding output plug-in, which defines the data format of your change set. It has to be written in C, as a plug-in inside your database. Luckily the developers provide a test_decoding output plug-in, which produces a usable, human-readable text format, and I parse its output, because I did not want to write a C plug-in and I did not want to threaten the stability of my PostgreSQL server. If you have those two things in place, you can put it together. You need to do some configuration on your PostgreSQL server: set the WAL level to logical and allow a number of replication slots. Then you can create one, a logical replication slot, giving the name of the slot and the name of the output plug-in, here test_decoding. The other thing you need to do, for my use case, is to say which columns are used to identify a feature.
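Because the before/after state ends up as JSON in the audit table, plain PostgreSQL JSON operators are enough to pull specific changes back out, as mentioned above. A small illustration follows; the table and column names (audit_log, old_data, operation, and so on) are assumptions about such an audit table, not the exact schema used in the demo.

```python
# Illustrative only: query an audit table whose before/after state is stored
# as JSON. Table and column names are assumed, not the demo's actual schema.
import psycopg2

conn = psycopg2.connect("dbname=gis user=postgres")
with conn, conn.cursor() as cur:
    # All deletions on one table in the last day, with attributes of the
    # deleted record pulled out of the JSON "old" state via the ->> operator.
    cur.execute("""
        SELECT changed_at,
               transaction_id,
               old_data ->> 'editor'     AS editor,
               old_data ->> 'start_date' AS start_date
        FROM audit_log
        WHERE schema_name = 'public'
          AND table_name  = 'construction_sites'
          AND operation   = 'DELETE'
          AND changed_at  > now() - interval '1 day'
        ORDER BY changed_at;
    """)
    for row in cur.fetchall():
        print(row)
```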
So I say I want it with all columns, because I want the full state of the feature before and after the change. If you have that, you can start using it: you manipulate your data, do edits, and after that you can really start fetching changes with the SQL function pg_logical_slot_get_changes. You give the name of the slot and you get back a structure with the begin and commit numbers and a record per change, from which you can parse out all the relevant data. And when you are done, do not forget to drop your replication slot, otherwise the WAL data will never be discarded. So much for the consuming part.

I want to talk a bit about related work on this topic. Other people are thinking about version control and so on: there is the GeoGig project, with talks here on Friday morning I think, which has an approach for disconnected, distributed work, branching, merging and all those things. It does a bit more, but it is also more complex. If you are thinking more in the direction of fine-grained replication of data, you should rather have a look at pglogical, a program released by 2ndQuadrant, a company doing PostgreSQL consulting services. It focuses more on getting data from one database to another, but they also say they want to be open for other programs to consume those changes, and they plan a JSON format to fetch them. And if you search on GitHub, you will see other people writing their own logical decoding output plug-ins for their needs; I decided to go a different way so as not to threaten my server by installing something I do not really know.

To summarize: we have logical decoding as a way to produce an event stream out of your database, describing what edits are really happening inside it. The logical-decoding jar is able to process this data stream and can write an audit log back into the database. It is aware of the geo aspect, so it can parse geometries and do something sensible with them, and it can also trigger cache invalidation in a GeoWebCache server. If you are interested, have a look at GitHub and check it out; there is also a binary release, already built, so it is easy to try, and a little tutorial for the example I gave here. I also give a reference to the PostgreSQL background documentation for this feature. I think it is quite interesting and could solve some of the problems we have every day. I am interested in your comments and open to questions. Thank you.

Yes, we are quite early, so we have about 10 minutes for questions. Go ahead and shoot. Hi. I like how you keep track of all the changes and can start a workflow from those changes. But at first I thought, well, you could set up something similar with just triggers on insert, delete and update. Would you comment on how your solution is different or better? The difference is that the application does not have to be aware of this background checking; you push it into the background of the database. You do not have to install triggers; imagine you do not have the rights to do that. So it is possible to do it with different techniques as well, that is right. Normally this would be built into the application itself; that is the solution I know from other information systems.
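For reference, the SQL side of what was just described can be tried with the built-in test_decoding plug-in roughly as follows. This is a generic sketch, not the speaker's tool; the table name and connection settings are placeholders, and postgresql.conf additionally needs wal_level = logical and a non-zero max_replication_slots before restarting the server.

```python
# Generic sketch of the logical decoding workflow (PostgreSQL >= 9.4) using the
# bundled test_decoding output plug-in. Required postgresql.conf settings:
#   wal_level = logical
#   max_replication_slots = 4   (any value > 0)
import psycopg2

conn = psycopg2.connect("dbname=gis user=postgres")
conn.autocommit = True
cur = conn.cursor()

# 1. Emit full old/new rows for this table so the decoded stream contains
#    all columns, not just the primary key.
cur.execute("ALTER TABLE construction_sites REPLICA IDENTITY FULL;")

# 2. Create a logical replication slot bound to the test_decoding plug-in.
cur.execute(
    "SELECT * FROM pg_create_logical_replication_slot(%s, 'test_decoding');",
    ("audit_slot",),
)

# ... edits happen in QGIS or elsewhere ...

# 3. Poll for committed changes; each call consumes them exactly once.
cur.execute(
    "SELECT * FROM pg_logical_slot_get_changes(%s, NULL, NULL);",
    ("audit_slot",),
)
for location, xid, data in cur.fetchall():
    # e.g. "table public.construction_sites: INSERT: id[integer]:1 ..."
    print(location, xid, data)

# 4. Drop the slot when it is no longer needed, otherwise WAL is kept forever.
cur.execute("SELECT pg_drop_replication_slot(%s);", ("audit_slot",))
```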
But in this case, if you don't have control of it, you can do it anyhow. Do you think this approach will be possible to use for an open-sread map tile server cache expiry? I think so, yes. I got one hint from second quadrant saying, keep care of your performance if you're doing logical decoding. So that would have to be tested in advance because there are many small edits coming in. At the moment, I think of rather system integration, which I want to do with it, saying, okay, I know that in a batch job at the night, some data will come in and I have to play it to a different system. Open-sread map is quite busy. Perhaps you have to test if it will really work. But the workflow could do it, yes. An advantage would be that you could do this stuff in times where the database isn't that busy. That's right. Yes, you can wait for half a day and then consume all those changes. It will wait for you until you check it out. It's actually a question I had myself. How long is this stored? It's there forever or until you pick it up? It is stored as long as you don't consume it. And you have to make sure that you consume it. Otherwise, your X-Log directory will never get empty. So monitor your replication slots. How far they are behind? More questions? No, it's okay. Just a quick question on users. I didn't see users in your address. I would expect this in an auditing tool. Is it an option? No, unfortunately not because the database doesn't store it. But in terms of this logical replication, you only see committed changes. You don't see which database user did commit the change. On that level, you have no chance to get it. So if you want it, you have to pull it into the application that it will write the current user onto the record and then you have it. It has to be an attribute in the record. But you can't pull it out of the metadata because this logical decoding or this replication thing doesn't just store the data that will go onto disk. And PostgreSQL doesn't save who committed this data by itself. So you need to put it into the application and then it will be in the record. Any more questions? No? Okay. Then I think we can thank the speaker again. Thank you. Go users.
Have you ever been wondering what edits are happening inside your databases? Logical Decoding, introduced in PostgreSQL 9.4, makes it possible to keep track of changes committed to the database. This talk presents how this mechanism can be used to audit PostGIS/PostgreSQL databases. After an introduction to the concepts of logical decoding, two use cases are presented: Quality Assurance: writing an audit log into the database after each commit so that someone else can review the modified data. Cache Invalidation: refreshing a GeoWebCache instance for the regions in which the data has changed after each commit. To support these two use cases, a little Java program able to run as a microservice was developed and will be shared under an open source license with the community via GitHub.
10.5446/20278 (DOI)
Okay, good morning everyone. Today we start this session block with Peter Neubauer and he's from Mapillary and he will show us what Mapillary is and how you can use it. So thanks. Yeah, good morning with a super nice view here. My name is Peter Neubauer, it's really awesome to be here and look at the Bundestag and everything. It's fantastic. So I wanted to talk to you about what we do at Mapillary and how you can use this data to improve maps to get data out of that platform, which is probably the main objective of Phosphogy and what projects that we contribute to the world so that you can use there. So we are basically a service that crowdsources a street level imagery from any device basically. There's panoramic devices, consumer grade cameras, mobile phones, whatever. We then make the use computer vision to generate more data from that and we make that available via APIs and so on. We also give the imagery back under Creative Commons share alike, of course pixelated, like the minimum requirements for having privacy contained and then we also integrate to open street map and to other mapping alternatives where you can then get that data and derive from it new data that then goes into open street map, for instance. We can't really do automatic edits because that's not what these initiatives are about, but we can give suggestions and underlying data so others can then point out like this is a traffic sign or this is a new way or whatever that is. To date, yesterday actually we started to cross the 80 million photos line. Yes, that's at least as I know it, that is more now than Panramio, which was the greatest photo collective of all time so far. So we kind of passed it yesterday. There's a lot of mapping going on all over the world, especially in Europe as always and in the US, but also India and Africa and South America are pretty active now. So also like the Red Cross and others are using Mapillary to map like catastrophic areas to get like a before and after view and like monitor climate changes and that kind of stuff. Since it's really not stitching images in the background, but a database that you can query, you can actually make like timelines and you can filter for users and for times of day and for color gradient or whatever you want. Looks a bit like this right now. This is San Francisco. This is a town very close to where I live, Helsingborg. This is actually the municipality sharing a professional street view data that they have acquired from people that drive around with measuring wagons. So this data is very, very good and it intersects then with users data that is a bit more shitty. Devices vary. Many people use the phone apps because you have everything in one. What we need is a GPS, the image direction. It can be inferred if you say the camera was pointing in the track of the direction or with the direction of the track or it has an offset of say 90 degrees. So it was pointing to the right and so on. And then of course professional rigs that do that. From that we use the focal length of the cameras and the GPS data to calibrate the model that comes from that and then be able to intersect these images. You can build quite a lot of interesting rigs. This is actually the Theta S Ricoh camera. That's a 360 camera and then he has the mobile phone there because these consumer grade cameras don't have the best resolution yet. They're very convenient but they have like 5000 pixels spaced out over 360 degrees which gives you kind of pixelated images if things are like far away. 
So we probably need some more resolution coming there. Behind the scenes, the first thing we do when the images come in is to try to detect faces and license plates via right now static detectors. We are working on self-learning like deep nets to do this and we blur them. We hard blur them in the image. Other things we are detecting and we are showing them or blurring them on the fly in the viewers and in what you get out. We don't want to destroy original data but in this case we are actually doing it in the first thumbnails that we generate. We generate thumbnails in four different sizes to minimize traffic. It's on Amazon and then depending on what you want you can pull them down up to 2048 big. Otherwise, many sizes won't fit there. We then do 3D reconstruction from this. If you look at this, it's actually not an alpha blending. This is actually the different textures, the parts of the textures blending into each other. So this is what it looks behind the scenes. There's a point cloud going. I can actually show you from yesterday. These are the camera frames that went here and these are the points that are kind of reconstructed from the overlaps of different images and from the calibration. So what we are building really is a global sparse point cloud on the world that is textured at the same time. We are now starting to investigate how to import lighter data which is like dense point clouds and then you would get of course the ability to texture dense point clouds which is super interesting. So this is calibrated so depending on how good the incoming data is you can actually measure in these point clouds. So you can find out like how far is it between this and this which is interesting for more than a palaties. They can measure like roads and tunnels and that kind of stuff. I can actually show you one of these examples. Yesterday I went to just around here instead of going to the party and if you look at the I would just reload it so it gets the right. This is the a bit older viewer but the good thing is it has the point cloud viewing. So this is one sequence, right? So you can see here the actual building and the trees there, you see the trees there being reconstructed and you can actually walk through it here, right? So this is not just a video, this is live on the side like all the time. Just so you know when the next person comes then via overlaps the point cloud will get enriched and brighter you saw the building was kind of smeared out, the big building because we only had one perspective. There's no depth perception but as soon as someone else comes and comes a bit from the side then this will be adjusted to form the actual building, right? So right now we only have one perspective. So let me see. So what we are doing now is if the image is permit we're starting to use deep learning to detect objects in these images and these scenes and since we have the depth information these are not just detections, this is detections but from these detections via interpolation and so on. We can partly stabilize the point cloud, we want to take out things that are volatile from the point cloud. For instance we don't want to match on sky. Sky is segmented here, the blue thing is sky and has a very good accuracy in deep nets, it's very easy to learn and it's very high value for us to take that out because it gives false overlaps, volatile objects. 
Same thing for cars, for people, for other moving objects that we can recognize, we want to take them out from the point cloud matching so the point cloud gets more stable. And also of course there's other objects that we want to detect like street signs like vegetation, park benches, buildings, what not. Right now the point cloud is a bit sparse but it is sufficient for street signs. If you can match like a traffic light street signs lamp posts, you see the lamp here, exactly matched. You can then, if you see it in two, three images, you can interpolate it and put it on the map where it is as opposed to in the image plane. And that then gives, we call that object merging, so there's the phase of detection of things and then the merging phase where you merge different detections into one object or one three-dimensional thing. So this is what we're doing right now. We have built the back end, I'm rebuilding it right now because it explodes the database. We have done 20 million images right now with one detector and there's about 100 shapes for that detector per image, which means 20 million images, 200 million shapes, 2 billion shapes and so on. So we need to come up with a better storage mechanism for that. But that's big data problem that's where I come from. So this is how it looks in a video. We do that for images, also people can upload videos and then this, of course, is much more affected because you can do object tracking and edge tracking and so on. You see the confidence intervals for vegetation, for cars and so on. Okay, let's see if I can. So from this then, this is kind of like a more, more like an example of what a municipality wants to do. They want to recognize certain objects and then place them into the scene and have them on the map that gives them the ability to, for instance, validate their databases or see when an object was seen first, last, if it is still there, in what condition it is because this is ground truth. So you can send students or garbage trucks around with cameras and then assess the data that you need and it doesn't need to be professional grade. It's enough that you get a hint of what is there. It's kind of good enough and it only gets better the more data there are. So we come kind of like from the low quality data range and work our way up. So this is what the newer website looks like. This is part of an open source project, Mapillary.js, where we open source the whole viewing experience. We're adding in there now the 3D placing of markers. So we know this is a farther away than this. You can actually see it in the blurring. If I, it's not exposed, like the marker API is not exposed yet, but if you go to this, for instance, and I enter blurring, I say I want to, everybody can suggest blurs and others. So this is a panoramic image, right? So if I do this, you'll see that I kind of are not marking in a rectangle but a rectangle in space. So I can even do it like this. It will actually be quite interesting. Then I would kind of blur the whole lower part of that sphere as opposed to a rectangle. So it's kind of non-trivial to do these kind of 3D things there. Okay. I don't want to blur this. Okay, so how do we make this data available? We're open sourcing as much as we can or as much as makes sense of the core code. Also like for the data, we are having basically three big outlets. 
One is mapller.js, which is the kind of visual part where we use these two APIs and the textures and the actual images to, as you saw, to make a kind of street view like experience, but plus plus because you can actually modify and measure and you can place markers. It's kind of a 3D leaflet, I would say. We're starting to build a 3D framework where you can place like map of shell or leaflet or anything 2D, you get a 3D framework. In the background, this is using the Mapbox Vector Tile format for almost any data. The big advantage here is that the Vector Tiles have already bounding boxing built in, so you don't have to have that as an extra parameter. Also they are highly optimized and they are styleable on the client. If you happen to like then the 2D map is consuming the same data as the 3D map because 2D maps are going against Vector Data for different reasons. Also then the Vector Data here is dynamically created and statically created. Our kind of dynamic APIs that are not just CloudFront like Protobuf Tiles that are there are also returning the same format. You can drill down to any data you want and then just add it to the map and style it. It's very, very convenient. Then there are other APIs that are special for private projects. We provide private projects for people that don't want to have their data open for construction businesses and so on that need private data. But that's then normal kind of JSON based, REST based APIs. Open source, as I said, the viewer is open source MapReload.js and we are working very, very actively on that because we are using it ourselves pretty intensely. It's also done for embedding and for easy showing off your own stuff. We are just adding now filters to it so you can actually say I only want footage from me or from these three users from this period because then the weather was best for showing up on my location or whatever you need to do. It's very good for embedding. What you saw, like the three reconstruction from images, it's called structure from motion. We have open source this too and are actively working on this. This lets you do city level 3D reconstruction from imagery and it's built on top of open CV so that's why it's called OpenSFM. And then we're opening the data as much as we can. As we are allowed, we cannot really put all the originals online because of privacy reasons but all the blurred originals we can put online and with the APIs we are giving you the metadata too. We have a special license for OpenStreetMap and for other open mapping activities that they can derive any data they want from these which is what we want to give back. Also, the image licenses are compatible with Wikipedia and with others so all this footage is usable in these activities. So yeah, that's from me. Any questions? I can show you a bit how it looks for OpenStreetMap integration but probably that's over the five minutes. Yeah, thank you very much and yeah, as you already said, we are open for questions. Hi. Is this a semantic segmentation system open source as well? Oh, yes, to a big part, it's partly part of OpenSFM and partly this is implementations of papers that are out there. So much of it is actually like research. I can get back to where we are at the stage right now, we are finding the best models for segmenting the data and some of these segmentations are done in several steps. So you first find for instance areas via color gradients and static methods and then you pipe them into another segment that does actual deep learning. 
So there's different stages in there and that depends on what you want to do. This stuff is no secret but we chain it in kind of like docker images. So the question is how to best open source plumbings. But yes, we recently published our findings in papers so we are actively participating in academic research on this stuff because it's so bleeding edge. The code, yeah, it's mostly Python and it's mostly academic papers right now. So look at papers from Gerhard and Peter in Graz. They are our academic kind of researchers and they will now go out and compete in the, what is it called? There's a big segmentations challenge right now. We think we are in place four according to the state of art. So watch Peter Concheater. He is regular presenting the papers and so at conferences. And as time goes we will have something that we can actually put into code. Right now it's very in flux. Other questions? First thanks for the presentation. For OSM street map, open street map contributor that want to add some attribute to the data based on mapillary database like postage science and restriction. Is that a pure manual process or are you providing tool to facilitate this? So we have been thinking about holding tagging data in our database. We decided against it because it's very hard to do it right for everybody. What we do instead is that open street map has source mapillary tags that refer to the UID. Every image has a UID and every object has a UID. So you can even in the future refer to recognized objects and say this is the base of us tagging this as a lamppost. So we would like to have links into the database not necessarily hold all the forksomenies that people come up with. I think that's one of the big problems in open street map to hold all the wrongly tagged misspellings. And so what we will do is to enable full text search on all the comments and all everything so you can actually hashtag things. So if you want to later on you could actually put in open street map hashtags into the comments somewhere and be able to filter out everything that goes that way. So we are kind of like taking the Twitter approach there. We think we are not finished thinking there. Thank you. Okay. Maybe we have time for another short question. I got another one. So you don't really know exactly for one image the direction of the image. If it is provided we know. However, the truth from mobile phones and so is a very salty one. So we normally use the reconstruction to also determine better the direction. So normally when you have the camera steady into one direction like relative to the moving direction then this is normally a far better source of image direction than anything else. Because compass is drift especially in cars where there is a lot of metal around it. It's basically useless unless you have special measures. So what I would suggest is to lock the compass direction into forward direction. However, we can with semantic learning for instance turn the images back when we detect like okay there is an image sky is on the lower side. So this is probably a rotated image. We can rotate it back and we can also say that all the compass says you are going there. You are viewing there but the big post tower is on the other side. So we turn this image in the direction. But then we need to have sufficiently good data surrounding that from just one sequence you can't really say it because you have no existing model. 
[Q] And when it's available, could you make that available in the API as well? Because it's not available right now. [A] Sorry? [Q] The direction of the image is currently not available in the API. [A] Yes, it is — it's called "ca", the compass angle. [Q] But you don't have pitch information. [A] We currently don't hold that, actually. In the EXIF there is an image direction tag; that's what we use. And I think on iOS we actually submit the yaw and pitch from the accelerometer, so that might be available, but we are not using it right now. That's another thing to improve in the future, to also have a height model, for drone images and so on. We are not going there right now — we have a lot to do with just ground-level imagery — but eventually that will come. Also the OpenStreetMap-like concept of levels: a bridge has two levels, one down there and one up there, so the images don't match. That kind of thing. [Moderator] Okay.
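To make the API discussion above a little more concrete, here is a hedged sketch of pulling one of the vector tiles and decoding it client-side in Python. The tile URL template, the layer name, and the property name for the compass angle are placeholders (the real ones live in the Mapillary API documentation); only the decoding call from the mapbox-vector-tile package is taken as given.

```python
# Sketch only: fetch a single Mapbox Vector Tile of image points and read a
# per-image heading property. URL, layer and property names are placeholders.
import requests
import mapbox_vector_tile  # pip install mapbox-vector-tile

TILE_URL = "https://example.com/mapillary/{z}/{x}/{y}.mvt"  # placeholder template
ACCESS_TOKEN = "YOUR_TOKEN"                                  # placeholder

def image_headings(z, x, y, layer="image", angle_key="ca"):
    """Decode one tile and return the heading property of each image feature."""
    resp = requests.get(TILE_URL.format(z=z, x=x, y=y),
                        params={"access_token": ACCESS_TOKEN}, timeout=10)
    resp.raise_for_status()
    # The z/x/y address already fixes the bounding box, so no bbox parameter is
    # needed; the payload is decodable and styleable entirely on the client.
    layers = mapbox_vector_tile.decode(resp.content)
    features = layers.get(layer, {}).get("features", [])
    return [f["properties"].get(angle_key) for f in features]

# headings = image_headings(14, 8508, 5512)   # arbitrary tile address
```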
This talk is going to give an overview of the different data endpoints of Mapillary, for example images, object detections (e.g. street signs), objects and vector tiles. We will look at different integrations like OpenStreetMap iD editor, JOSM, Wiki Loves Monuments and others using portions of this data to improve or document physical spaces. Also, the talk will cover different Open Source integration libs like OpenSfM, MapillaryJS and the iD editor.
10.5446/20253 (DOI)
So, first of all, I want to thank the organizers for the invitation — it is always a pleasure to be here; it is a very impressive place. I should say that this is joint work with Francis, as you know. And as the title suggests, the talk is about stochastic zeroth-order optimization. The main assumption is that we focus on convex functions that are very, very smooth; you will see later what I mean by that. OK, so here is the motivation. The title contains a lot of keywords, and each of them means something different. What we want to do is classical convex optimization: we have a convex map F and we want to minimize it, under some regularity assumptions — typically that it is smooth or non-smooth. I do not know whether everyone here is familiar with convex optimization, so let me just recall the keywords we will use. The map F can be non-smooth if it has kinks, like a hinge loss, or it can be smooth if its second derivatives are bounded. But typically, in machine learning, when you look at the function you actually want to optimize — logistic regression, or minimizing a squared norm — it is not merely smooth: the second derivatives are bounded, the third derivatives are bounded (for the squared norm they are even zero), and the same holds for logistic regression. So we usually aim at optimizing convex functions that are smooth, while never using the fact that they are in reality much smoother than that. The question is: can we exploit this higher level of smoothness to improve the convergence rates? That is the main concern of this talk. As I said, we will look at stochastic optimization, which means that when you make queries you get noisy feedback. For instance, assume the map F you want to optimize is an expectation — say the expected distance between your parameter theta and the data. You do not have access to this expectation itself; you only observe, or compute, the distance for a sample, so you only get a noisy estimate of the value of F at the point you queried. That is where the stochasticity comes from: there is noise. So this is our setting: we want to optimize such a function from this kind of stochastic feedback. Let me first recall what an optimization algorithm looks like here.
You have a map F that you want to minimize and a constraint set X, a subset of R^d, which you know. When you do optimization, you query a first point: you ask for information about F at that point. The feedback you receive depends on your computational budget. You might get the Hessian of F, and then you can take a Newton step — if you can afford Newton's method, do it. With less computational power you may only have access to a gradient of F: you can compute the gradient of your function. With even less power, you only get the value of F. And in our setting, every time you obtain a value, it is noisy: you observe the value of F plus noise, or the gradient plus noise, or the Hessian plus noise. The title says "zeroth order": zeroth order means you have no access to the gradient of F, because it is too hard to compute; you only have access to point evaluations. So you make a first query x_1 and you get an answer, the value of F at x_1 plus noise. Then you output what you currently believe is the minimizer of F; call it x_2. If you are allowed a second query, you query x_2, observe F(x_2) plus noise, and output x_3, your new guess for the minimizer, and so on. The goal of optimization is to minimize the optimization error, which is the difference between the value of F at your final guess after T steps, x_{T+1}, and the minimum of F. That is what we want: a good algorithm for this problem. Why did I spell out this query protocol in such detail? Because of another word in the title: online. Online optimization differs from classical convex optimization in that the map you want to optimize is not fixed; it may change over time. At the first query it is F_1, then F_2, F_3, F_4, and so on — you face a sequence of maps F_1, F_2, F_3, ... When you query x_1 you receive feedback about F_1: its Hessian at x_1, or its gradient, or its value. Here we assume you can only observe values, so you get F_1(x_1) plus noise, you output x_2, your guess for the minimizer; then you query F_2 at x_2, receive feedback about F_2, output x_3, and so on.
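To fix notation, here is the classical criterion just described, written as a formula (a sketch consistent with the verbal description above; x_{T+1} denotes the point the algorithm outputs after its T noisy queries):

```latex
% Optimization error after T noisy zeroth-order queries.
\begin{equation}
  \varepsilon_T \;=\; \mathbb{E}\big[\, F(x_{T+1}) \,\big] \;-\; \min_{x \in X} F(x).
\end{equation}
```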
In classical optimization you compare yourself to the minimum of a single function; in this online scenario we look at a more delicate criterion, which is called the regret — we already talked about it in the previous talk. The regret is the following: you have the sequence of maps F_1, F_2, ..., F_T, and you compare the average loss of your predictions, (1/T) times the sum over t of F_t(x_t), with the loss of the best fixed point x*, that is, the point minimizing the average loss. So the regret is the difference between your cumulative loss and the cumulative loss you would have suffered by always playing the single best point x* in hindsight. One small remark: this is not exactly a bandit problem, for those who know bandit convex optimization, because in a bandit problem the queried point has to lie in the constraint set, whereas here I may query my function at any point I want, not necessarily inside the constraint set. OK, this was just to define online optimization. To be precise, I will not really talk about online optimization during the talk; I only mention it because every result we have for classical stochastic optimization also holds for the online setting, essentially for free. We will see just one slide of proof, and you will see why everything I say about convex optimization carries over to online optimization — in fact, in that precise sense, online optimization is even simpler. OK. So, in the title — I told you it was a long one — so far I have only talked about online, stochastic, zeroth-order convex optimization; the title also announces specific assumptions. Classically, there are two types of assumptions that help in solving a convex problem. The first is that F is mu-strongly convex, and the second is that F is smooth. As you can see, they are more or less the same kind of condition, except that the inequality sign is reversed; they are dual conditions. The first one, mu-strong convexity, means, in one dimension, that your convex map has second derivative at least mu. Every convex function is 0-strongly convex, but when mu is positive you get real leverage for optimization, because your map F is lower bounded by a quadratic. The second type of assumption, which we will make or not depending on the setting, is smoothness, and typically, in the literature, what is common is to look at smoothness.
Smoothness means that your gradient is Lipschitz, or equivalently that you can upper bound your map F by a quadratic. So if F is at the same time mu-strongly convex and smooth, it is sandwiched between two quadratics, both assumptions help, and you can obtain better convergence rates. The question is: can we combine this sort of assumption with higher-order smoothness to improve the rates further? [Q] Why is there an exponent on the constant here? [A] Because on the next slide, with a beta, I will have a constant raised to a power like beta, and I put the analogous power here just for scaling reasons — Francis is very fond of keeping the constants homogeneous. If we rescaled the function, multiplying F by some constant, we want the bounds to rescale consistently; without that exponent they would not. One could use another notation, but it gets complicated. If it bothers you, just read your M as an M squared — that is the quick, slightly sloppy answer. OK, for those who are not used to these quantities, a picture: we want to optimize the map in black, F. Being smooth means that you have a quadratic upper bound on F; being strongly convex means that you have a quadratic lower bound, the red one underneath. (On the drawing they may not look exactly like parabolas, but the idea is correct.) And intuitively — I have to speed up a little — if you are at a point x and you know the map is smooth, you can safely make a large jump, because if you move to a point y you know by how much your error decreases. Being strongly convex tells you that the minimum cannot be far away: you are lower bounded by a quadratic, so you can discard a large part of the space where you know the minimum cannot lie. So strong convexity helps you, and so does smoothness. [Q] Even for quadratic functions you have the 2? [A] Yes. The classical notion of smoothness is being 2-smooth: the second derivative is bounded by M_2, which means that the difference between f and its Taylor expansion of degree 1 is bounded by a polynomial of degree 2 in the distance.
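For reference, here are the two dual conditions just described, written out (a standard formulation consistent with the talk; M_2 plays the role of the smoothness constant, and the next paragraph generalizes the second condition to higher orders):

```latex
% Strong convexity (lower quadratic bound) and 2-smoothness (upper bound on the
% gap to the first-order Taylor expansion), for all x, y in the domain.
\begin{align}
 f(y) &\;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle + \tfrac{\mu}{2}\,\|y - x\|^{2},\\
 \big|\, f(y) - f(x) - \langle \nabla f(x),\, y - x \rangle \,\big|
   &\;\le\; \tfrac{M_2}{2}\,\|y - x\|^{2}.
\end{align}
```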
So here we will say that f is beta-smooth, with beta at least 2, if the same thing holds with the Taylor expansion of degree beta minus 1: f is beta-smooth when the difference between f and its Taylor expansion of degree beta minus 1 around x is at most a constant M_beta times ||y − x||^beta. If you do not remember what the Taylor expansion looks like in high dimension, it is the usual formula with the successive derivative tensors; the only thing to keep in mind is that being beta-smooth controls the quality of that polynomial approximation. This is an assumption that holds for the functions we want to optimize in machine learning — think of a squared norm, or the logistic loss. So the goal is to use the fact that we know f is really, really smooth — beta-smooth, not merely 2-smooth — to improve the rates. A few remarks about this assumption. If you look at the definition, being 0-smooth means being bounded by M_0; being 1-smooth means being Lipschitz with constant M_1; and if a map is both beta_1-smooth and beta_2-smooth with beta_1 smaller than beta_2, then it is beta-smooth for every beta in between. For the functions I mentioned — the logistic loss, or any squared norm — you can compute all these constants M_beta explicitly; for logistic regression M_beta grows only about linearly in beta, so it stays quite small. OK, so that was the assumption, and now to our objectives. Before describing the results, let me briefly review the classical optimization methods. First, convex optimization without noise and with access to gradients — not the setting we study here, but useful for intuition. In that setting, one method achieving the optimal rate for minimizing a convex function is the ellipsoid method. For those who do not know it: the minimizer of F lies in some ellipsoid, the blue one; you query the center of this ellipsoid, you obtain a gradient there, and you then know that the minimum of F cannot lie in the half of the ellipsoid on one side. So you can discard that half, and you know the optimum is in the remaining half. The trick of the ellipsoid method is that this strange half-ellipsoid shape can itself be enclosed in a new, smaller ellipsoid, and you know that at the next step your minimum lies in that new blue ellipsoid; then you query its center, and so on. If you run this algorithm, the volumes of the successive ellipsoids shrink by a constant factor at each step, so you converge at a linear (exponential) rate. But this has to be without noise.
If you have noise, this does not work: you would have to query the same point many times, and it essentially never works — or almost never; in particular it does not work for online optimization. So we will not use that kind of technique. Instead we will look at gradient descent, because gradient descent is the method that extends. I am sure many of you know it, but the algorithm is this: you have a constraint set X, and projected gradient descent, without noise and with access to gradients, is exactly this formula: x_{t+1} = x_t − eta times the gradient of F at x_t; you take a gradient step, and if you leave your constraint set X you project back onto X. The convergence rates of gradient descent are well understood. If you make no assumption on your map F beyond convexity (and Lipschitz continuity), and you have no noise and access to gradients, the rate is 1 over the square root of T. If your map is 2-smooth, you go from 1/sqrt(T) to 1/T, which is faster, and you can even accelerate the algorithm, using another scheme, and obtain 1/T^2. If the map is not smooth but strongly convex, the rate is of order 1/(mu T). And if you have both assumptions, smoothness and strong convexity, gradient descent converges linearly. So smoothness and strong convexity let you converge much faster than without them. Our idea is to do the same for stochastic optimization with zeroth-order feedback. Zeroth-order feedback means, again, that you have no access to gradients, only to values of the function at points. Now, if there were no noise, there is a trivial argument: in one dimension, you can estimate the derivative of f by querying f(x + delta) minus f(x − delta), divided by 2 delta; when delta is 0, or very, very small, this is exactly the derivative of f. So without noise you can recover the gradient with two queries, or d + 1 queries in dimension d, depending on the scheme, and the rates stay the same with T replaced by T divided by d. That is correct when there is no noise. But with noise everything breaks: you also pick up a noise term epsilon divided by delta, which blows up as delta goes to 0. So this idea certainly works without noise, but with noise you certainly cannot take delta equal to 0.
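A tiny numerical illustration of this last point (the softplus function is used as a stand-in for a very smooth function; the noise level is an arbitrary choice):

```python
# With noisy evaluations, the finite-difference estimate
# (f(x+delta) - f(x-delta)) / (2 delta) has a bias that shrinks with delta
# but a variance that blows up like 1/delta^2.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.log(1.0 + np.exp(x))        # softplus: a very smooth function
fprime = lambda x: 1.0 / (1.0 + np.exp(-x))  # its exact derivative
x, sigma, n = 0.3, 0.05, 20000

for delta in (1.0, 0.1, 0.01, 0.001):
    noisy = lambda z: f(z) + sigma * rng.standard_normal(n)  # n noisy queries
    est = (noisy(x + delta) - noisy(x - delta)) / (2.0 * delta)
    print(f"delta={delta:7.3f}  bias={est.mean() - fprime(x):+.4f}  std={est.std():8.3f}")

# The standard deviation grows roughly like sigma/delta, so delta cannot be
# driven to zero: one has to trade bias against variance.
```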
Within zeroth-order methods without noise there is also a way to solve convex optimization directly, without mimicking a first-order method. In one dimension it is quite simple: you query three points — left, middle, right. If the middle value is below the other two, you know the minimizer lies between the left and right points, so you can discard what is outside; if the three values are monotone, you know the minimizer lies beyond the lowest one, so you can discard the rest. Typically, every time you make three queries you can cut the remaining interval roughly in half. This works in one dimension because you can do this kind of line search; it can be extended to higher dimension and you still get a geometric rate, but the issue is that the rate degrades like exp(−T / d^7) or so, which is extremely slow when the dimension is large. So we will not use that type of method, even though it exists. The take-home message is what I wrote here: when you go from first-order to zeroth-order methods, you typically multiply your rate by d, because you need about d queries to get one estimate of a gradient. And if you look at all the elimination-style algorithms — ellipsoids, pyramids and so on — they require querying the same point several times to reduce the variance, which is not very interesting and not really workable here. Next, stochastic gradient descent: you observe a noisy version of the gradient of F at x_t, and stochastic gradient descent is exactly the same as gradient descent, except that you plug in this noisy version of the gradient. There is a classical result on this problem: if you have an unbiased estimate of the gradient — the expectation of the noise is 0 — and its variance is bounded, then for a general convex function you obtain a rate driven by that standard deviation over the square root of T, with correspondingly better rates under strong convexity. So: when you go from first order to zeroth order you multiply your rate by d; when you go from noiseless to noisy you again pay a dimension-dependent factor. The question is what happens when you go from noiseless first-order to noisy zeroth-order: we would hope to multiply the rate by d and by d, hence by d^2.
And that is essentially what we achieve. This slide is a summary of what is known and what we aim for, in three columns: the rates for first-order methods, the rates we could hope for with noisy zeroth-order feedback, and what we actually obtain. What we get is not exactly the target, I admit, but it is rather close, because when beta is very large — think of beta going to infinity, as for the squared norm or the logistic loss — the exponents of the form (beta − 1) over (beta + 1), and so on, tend to 1, and we recover the rates we were aiming for. It is not great when beta is small, that is, when the map is only moderately smooth, but when the map is really smooth we recover the target rates. And just to finish the review of the literature, here is what is known about noisy zeroth-order convex optimization. We know that 1 over the square root of T is the optimal speed in T; we know there are algorithms whose rate is a polynomial in d over the square root of T, but the exact dependence on d is not known — for convex functions one would really hope to pay only a factor d. If we assume that the map is both strongly convex and smooth, there is a d^2 over T result, from a paper around 2014 whose authors I do not remember offhand. With only one of the two assumptions, the known rates are T to the minus one third and T to the minus one fourth; and with convexity alone you also have a 1 over the square root of T rate, but multiplied by log T to the power d. You could argue that the dominant term there is not the square root of T but the (log T)^d: a log is small, but raised to the power d it is huge. I think that as soon as d is around 10, for any T you could realistically take, the (log T)^d term dominates. [Q] Not asymptotically. [A] No — I mean, just do the computation. The bound holds for all T, so if you let T go to infinity the square root wins; but for every T you can actually imagine — the number of atoms in the universe, the number of seconds since the Big Bang — the (log T)^d factor dominates. It is a log, so it is small, but the power d makes it a really big deal. Finally, when you do have higher-order smoothness, there is a paper by Polyak and Tsybakov from the nineties that obtains more or less the same kind of rates, but asymptotically and without an explicit dependence on the dimension; something depending on d is hidden in the constants. OK, so these are our objectives; we will not quite reach these rates, but we can get fairly close. How am I doing on time? Ten minutes? Five? Twenty? OK.
So here is how we proceed: there are two tricks that we combine. The first one is by now well known: it is a way of estimating gradients from function values. The second is a way of smoothing maps that are already very smooth even further. We want to use a gradient method for our problem, but we do not have access to gradients, only to values F(x) plus noise. There is a natural gradient estimate, the one I mentioned before: in one dimension, (f(x + delta) + noise) minus (f(x − delta) + noise), divided by 2 delta, is more or less the derivative of f at x — exactly so if delta is very small, and approximately so when delta is merely small. The somewhat surprising fact is that this difference is, in expectation, exactly the derivative of a function that is almost f: it is the derivative at x of f_delta, where f_delta is the average of f over the interval of radius delta around x — the expectation of f(x + delta u) with u uniform on the ball. So if we were trying to minimize f_delta, we would have an unbiased estimator of its gradient: draw a sign epsilon equal to plus or minus one with probability one half each, query f at x + epsilon delta, and use g = f(x + epsilon delta) times epsilon over delta; the expectation of this g is exactly the gradient of f_delta at x. So it is unbiased, and if you look at its variance, it is of order 1 over delta squared. Now, if you run stochastic gradient descent with respect to f_delta — which is convex — the convergence rate is 1 over the square root of T times the standard deviation of your estimator, so 1 over (delta times the square root of T). But you are optimizing f_delta instead of f; since f_delta is within delta of f, your total error is of order delta plus 1/(delta sqrt(T)), and optimizing over delta gives a rate of T to the minus one fourth. This is exactly the idea used by Flaxman et al., and, as Francis points out, it really goes back to Nemirovski and Yudin. In dimension d the same idea works: if you sample u uniformly on the unit sphere of dimension d, then the expectation of (d / delta) f(x + delta u) u is exactly the gradient of f_delta, where f_delta is now the average of f over the ball of radius delta around x — almost the gradient of f, but exactly the gradient of a map that is almost f. And doing the same computation, the variance of this estimator is bounded by d squared over delta squared.
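The identity underlying this construction, written out (a standard statement consistent with the description above; B_d denotes the unit ball and S_{d-1} the unit sphere):

```latex
% Smoothed surrogate and the one-point gradient identity
% (Nemirovski--Yudin, Flaxman et al.).
\begin{align}
 f_{\delta}(x) &:= \mathbb{E}_{v \sim \mathrm{Unif}(B_d)}\big[\, f(x + \delta v) \,\big],\\
 \nabla f_{\delta}(x) &= \mathbb{E}_{u \sim \mathrm{Unif}(S_{d-1})}
    \Big[\, \tfrac{d}{\delta}\, f(x + \delta u)\, u \,\Big],
\end{align}
% so g = (d/\delta) (f(x+\delta u) + \text{noise})\, u is an unbiased estimate
% of \nabla f_{\delta}(x) (zero-mean noise vanishes in expectation), with
% variance of order d^2/\delta^2.
```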
And then you run exactly the same computation as before, with the d squared over delta squared variance, and you again get the corresponding rate of order T to the minus one fourth. So, as Francis said, this is an old idea going back to Nemirovski and Yudin, then Flaxman et al., and it has since been used by many people; we claim absolutely no credit for it — we simply use it. Our contribution is the second trick, which is to use a kernel to smooth the map further. Why do we do that? Because, remember, our map is very, very smooth: f(x + small r) is very close to its Taylor expansion, hence very close to a polynomial in r. So we introduce a kernel, a function K, such that when you integrate K against r you get 1, and when you integrate K against the other relevant powers of r you get 0. You can write explicit formulas for such kernels; they can be computed. Why is this useful? Because if you use this kernel and integrate your queries of f against it, the resulting quantity is extremely close to f: instead of an approximation error of order delta, as with the standard smoothing, you get an error of order delta to the power beta. So we exploit the fact that f is very smooth to make our surrogate much closer to f — delta^beta instead of delta — and when beta is essentially infinite, that is much, much smaller. That is the point of the kernel. Now we combine the two tricks and build a gradient approximation of this smoothed function. Remember, Nemirovski's trick gives an unbiased estimator of the gradient of a function that is close to f; here we do exactly the same, except that the function whose gradient we estimate is far closer to f than the standard surrogate. The estimator is simple to use: you sample a direction v on the unit sphere and a scalar r, you query f at x + r delta v, and you multiply by the appropriate factor and by the kernel — in effect you integrate your queries against the kernel. If you look at the resulting map, which we may call f_delta^r, its gradient is exactly given by this formula, so we have an unbiased estimator of the gradient of f_delta^r. These maps are quite interesting, because f_delta^r is not merely within delta of f, it is within delta^beta of f; and the gradients are also very close: the gradient of f_delta^r is within delta^(beta − 1) of the gradient of f. That is what the kernel buys us. And if you compute the constants appearing in all these expressions, they are explicit and grow only like beta squared, or maybe beta cubed.
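One standard way to write the kernel conditions described above (this follows the Polyak–Tsybakov-style construction; the exact normalization and the precise range of powers that are annihilated are details the transcript does not pin down, so read this as a sketch):

```latex
% A kernel K on [-1,1] that keeps the linear term of the Taylor expansion and
% kills the other monomials up to degree beta - 1:
\begin{equation}
  \int_{-1}^{1} r\,K(r)\,\mathrm{d}r = 1,
  \qquad
  \int_{-1}^{1} r^{\,j}\,K(r)\,\mathrm{d}r = 0
  \quad \text{for } j \in \{0, 2, 3, \dots, \beta - 1\}.
\end{equation}
% With u uniform on the sphere and r drawn on [-1,1], the single noisy query
% f(x + \delta r u), multiplied by (d/\delta)\,u\,K(r) (up to a normalizing
% constant), estimates the gradient of a surrogate within O(\delta^{\beta}) of f.
```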
OK, so the only slightly delicate point, the main difficulty of this technique, is that the maps we have just defined, because we use a kernel, are not necessarily convex. That is a bit of a nuisance. If f is mu-strongly convex, the surrogate remains essentially strongly convex, and it is genuinely convex when beta equals 2; for larger beta we may lose convexity. Still, we can use the technique: we only lose a little in the constants and in the speed of convergence, nothing major. [Q] Excuse me — and when beta is less than or equal to 2, is the reverse direction interesting? [A] This is really just a side remark, not the point of the talk: if beta is 2 the smoothed map is convex, whereas for beta equal to 3, 4, 5, up to infinity, it is not necessarily convex. But again, this is only a remark, and it is not a serious problem for us. So, we use the two tricks, and then we simply run stochastic gradient descent with this estimator of the gradient. We have two main algorithms. The first is for when the constraint set is compact: there we can make a single query per stage — one query per function — and we do projected gradient descent, x_t equal to the projection of x_{t−1} minus a step size gamma_t times the estimator of the gradient of the smoothed approximation of f. The other algorithm is for when X is unconstrained, say all of R^d. There, at each stage, we make two queries, F at x_t^1 and F at x_t^2, with independent noise. The independence is crucial: if the noise were the same at the two points it would simply cancel in the difference; here each query comes with fresh noise, so the noise really is there. To be honest at this point: for online optimization this means you must be able to query the same function F_t twice, which is debatable; if you consider that not doable, use the first algorithm, which queries each function only once — but then you need a constraint set. [Q] Can you avoid the constraint by allowing more queries? [A] No. The issue is that with a single query in the unconstrained online case there is an intercept that you cannot control: the adversary can shift the function by an arbitrarily large or small constant, and that kills you. When you query two points you can difference out the intercept and normalize the problem. So it is really the intercept, not the noise, that is the obstruction. That is the main algorithm. Then comes the choice of all the parameters — gamma_t, delta_t, r_t — and these choices depend on the assumptions we make on the function.
Depending on whether you assume the functions are mu-strongly convex or not, and whether you look at constrained or unconstrained optimization, the parameter choices change; I will not go through all of them. For example, for constrained optimization with strong convexity we have explicit choices: the step size can be taken as gamma_t of order 1/(mu t), the classical choice for strongly convex functions, and the exploration radius delta_t is chosen accordingly. The output of the algorithm is simply the average of the iterates. This algorithm yields an error that scales the way we wanted, namely like d^2 over (mu T) raised to a power that tends to 1 as beta goes to infinity, so the leading term scales like d^2/(mu T) in the very smooth regime. [Q] What does delta look like at time T — does it go to zero? [A] If I remember correctly, delta stays essentially of constant order: there is a factor like beta over e in there, so delta does not shrink with T; typically it is a constant. And a really, really large beta is good news here, because it means the function is extremely smooth, so you can query points that are far from your current iterate x_t and still get a very precise estimate of the gradient. Think of the extreme case where the map is linear — not strongly convex, of course, but linear: then you can estimate its gradient accurately from points far away from x. The same intuition applies to a function that is really, really smooth: locally it is almost polynomial, so querying far from the current point still gives precise information. Another way to see it: for a quadratic function, if you want a precise estimate of the gradient, you can query two points far away on either side, and that already gives you the gradient. OK, that is the main idea. Now let us look at the proofs. The next slides are the ugliest slides you will ever see, but they are here only to make one point at the end of the talk. This is the proof of the convergence rate of the previous algorithm; it has only six main steps, and it is quite standard once you are familiar with optimization. Let me just emphasize the fifth step. You expand the squared distance from x_t to a fixed x, using the update rule of the algorithm — this is the standard expansion. There is an error term that you bound, and you know that this quantity is small; and the quantity in orange is, in expectation, more or less the gradient of the smoothed surrogate f_delta^r.
So you can substitute the gradient of f_delta^r there and, using strong convexity, you obtain this kind of inequality. Look at the first two terms: you can rearrange them, divide by gamma_t, and in the end you get the difference f_delta^r(x_t) minus f_delta^r(x). Then you collect the remaining terms — here you have ||x_t − x||^2, there ||x_{t−1} − x||^2 — you do the bookkeeping, which is standard algebra, nothing difficult, and you arrive at this form. The key point is that if you choose gamma_t equal to 1/(mu t), the orange term on one line is exactly equal to the orange term on the next line, so when you sum over t all these terms telescope and cancel, and you are left only with the sum of the remaining terms — which is exactly what happens here: a remainder term, and that is all. Now, if you did not follow every step, that is fine; I am not asking you to follow the last five minutes in detail. The point is this: at stage 5 of a proof in convex optimization, one typically argues as follows — this quantity is bounded by roughly a constant, and what I have here is an average of values of a convex function at the iterates, which, by convexity of F, is at least the value of F at the average iterate. Using convexity in that way is the final step of the standard proof. But if you look more closely at stage 5, and drop one term I will not dwell on, what you see there is exactly the regret — well, more or less the regret: read F as F_t, and this quantity is precisely the cumulative loss of your predictions compared to the loss of the best fixed point. So in convex optimization, the last step of the proof uses convexity to show that the regret upper bounds the optimization error. If you want regret bounds for online optimization, you run the same proof but simply stop at stage 5. In that sense online optimization is easier than classical convex optimization: the classical result needs one extra step in the proof. That is the real purpose of this slide — nobody can read it, I am not even sure I can — it is there to show exactly where the regret appears in the proof, that minimizing the regret is not much more complicated than minimizing the optimization error, and, incidentally, that the proof is not very complicated. OK. OK, thank you. Good — now, whereas before
we had strong convexity and a constrained set, we now move to the unconstrained case and use the two-point method. At each stage you query x_{t−1} + delta_t r_t u_t and x_{t−1} − delta_t r_t u_t, you take the difference, you form the gradient estimate, and you make the descent step; with more or less the same choice of parameters you get the same type of results. So we obtain the same rates in constrained and unconstrained optimization, except that in the unconstrained case we must query two points per function at each stage, in order to get rid of the intercept. Let me just say — I am short on time — that if the map is strongly convex we can also prove the corresponding rate in this setting, but that is not really the point right now. Similarly, if you only have convexity and a constrained set, other choices of parameters give the analogous results, which I will not detail; and for the unconstrained convex case, with yet another set of parameters — just trust me — we again obtain the rates. So, back to the objective of the talk. Remember the three columns: with first-order feedback, the classical rates, say 1/T; with noiseless zeroth-order feedback, you multiply by d; and with noisy zeroth-order feedback, we believe — this is a conjecture, perhaps provable for some natural problems — that the optimal rates should carry another factor of d. What we obtain are exactly these conjectured rates, but raised to a power that tends to 1 as beta tends to infinity, which is the regime we are interested in. For mu-strongly convex functions it is the same story: 1/(mu T), then d/(mu T), then a conjectured d^2/(mu T), and we get something that is not exactly that but which also tends to it as beta goes to infinity. The same holds for constrained convex optimization, and, as I said before — that was the point of stage 5 — the same results hold for online optimization, for the regret: you simply stop the proof one step earlier. And I think I am on time, so I will leave you here, with the objectives and the rates we obtain on this slide. Thank you. [Q] In the conjecture — do you assume some kind of smoothness, smoothness of every order, or just plain smoothness? [A] In the conjecture — well, it is a conjecture, so it would have to come with a new lower bound; I do not know. [Q] Because there is currently a lower bound for noisy zeroth-order optimization of convex and strongly convex functions with a square-root-of-d gap between the two. [A] In the same setup that we consider? [Q] Yes, noisy zeroth-order optimization under smoothness. [A] There is such a paper, true, yes — but I believe that type of lower bound is not in exactly this setup, because we allow queries that may lie outside the constraint set.
[Q] The lower bound should not care about that, even if the domain is unconstrained. [A] Maybe — but I remember that these two setups are not quite the same. I do not remember exactly why, but they are not the same; we had this discussion in Singapore, and you convinced me then that they differ, though I no longer remember the argument. So we can check offline, but I do not think it is the same. And I believe the point is that for convex optimization you can get these separations with noise in our setting. OK, maybe — yes, let us check. But I am sure that for bandits it is different, and I suspect that is why you picked that square root of d: for bandit optimization you would indeed need it, and I think our bounds sit between the two in the corresponding way. [Q] So, in the first line, your result says that if beta is smaller than 2 you have one rate, and on the right, the last column... [A] The black one is for beta larger than 2. [Q] So that is the case where the smoothed surrogate is not convex. [A] Yes, but you can still prove it, because — let me go back to the proof — the plain surrogate f_delta is always convex, since it is an average of f with a nonnegative weight. In the proof, the kernel surrogate f_delta^r should appear at the end; but if you run the same proof with f itself, which is convex, and only do the approximation step at the end, you pick up an extra error term that behaves in the same way. So it still goes through. [Q] You are not doing vanilla gradient descent on a convex function; you are doing gradient descent with noise. Do you have an intuition for where the magic comes from — essentially, that when you query very far away the noise has only a small influence on the measurement? [A] The radius delta r — in fact I am fairly sure that if you do the approximation carefully, delta does not have to go to zero, or only very, very slowly, more slowly than the convergence rate; you can essentially take delta r bounded. The other thing is that if your map is really, really smooth, what you have is this bias–variance trade-off.
And if your function is really, really smooth, then even with delta r bounded away from zero you can have a very small variance for the gradient estimator while keeping a very small bias — at a fixed bias you can lower the variance. [Q] Of course. [A] And the bias–variance trade-off improves as the smoothness improves. So it is not magic: it really is the smoothness of the function you are optimizing that makes it possible; if it is very smooth you can make the variance very small, or the bias — it is the same trade-off. [Q] So can you interpret it as a kind of variance reduction? [A] That is funny, because that was exactly Alex's question last time. Yes, I think you can use variance-reduction ideas to improve these algorithms in practice, and it is indeed a form of variance reduction. [Q] To be more direct: you assume the noise is well behaved. [A] Yes — and that is a good question. I think that with that kind of assumption one can indeed improve these algorithms and recover these convergence rates and techniques, for instance in finite dimension. [Q] For me the motivation was a little unclear. You invoke many machine-learning loss functions, but a first example that comes to mind is not of this type; people then use regularized, smoothed versions with a steep transition — they are infinitely differentiable, but the maximum of the derivatives is very high, so all these constants M_beta are finite but very large. There are results of Nemirovski for such situations, where one works with the maximum of the gradient or of the higher derivatives over the domain. How do such constants enter your bounds? [A] I mean, the logistic loss is genuinely hyper-smooth, with well-behaved constants, so for that function the assumption really is mild. The right way to phrase your question is: if I have a function that is not smooth, can I replace it by an infinitely smooth approximation, run this kind of algorithm on the smooth approximation, and thereby optimize my original function? That is not what we do here — the functions we target are the logistic loss and squared norms; that is the main example. If your function is not smooth and you try to approximate it by a very smooth one, I am not sure this is the best idea, even though it is, of course, what people do when they smooth a non-smooth loss.
Oui, mais mon souci est que si la meilleure question est de contrôler votre MBTI, votre délicat est增é, je ne veux pas dire typiquement, mais c'est juste un random guess. La MBTI va augmenter plus que la restante de la fonction. Mais c'est un random guess. Si vous ne savez pas la MBTI, ce n'est pas si mauvais. Nous avons utilisé le MBTI, nous avons utilisé le MBTI, mais si vous ne savez pas, vous pouvez récolter le code pour 1, et en plus d'avoir le code dans cette expression, pour une bonne puissance, nous allons juste aller au-delà avec une puissance qui n'est pas si belle. Si vous ne savez pas la MBTI, c'est ok, vous ne l'utilisez pas. Si vous ne savez pas la MBTI, c'est plus compliqué. Si vous savez la MBTI, si vous n'avez pas la MBTI, si vous n'avez pas la MBTI, c'est plus compliqué. Si vous n'avez pas la MBTI, c'est ok, si vous n'avez pas la MBTI, c'est ok. Si vous n'avez pas de idée, peut-être que vous pouvez trouver un algorithme d'adaptive. Je ne le sais pas. Et depuis que vous n'avez pas la MBTI, vous avez le code de la MBTI, et depuis que vous n'avez pas la MBTI, vous avez juste un point. Je ne suis pas sûr que vous pouvez vraiment estimer la beta 59. Je ne suis pas sûr. Je ne veux pas faire un autre pure en normes guises. Je dirais que vous ne pouvez pas estimer, mais c'est encore une pure en normes guises, mais je vais Chainman, je vais encore se trouver àізite.<|fr|><|transcribe|> pour les 26 h. Elishe. Ok. Ok. Faites un petit lien à la graine. Ok. Applaudissements. Applaudissements.
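To fix ideas on the smoothing radius delta and the bias/variance trade-off discussed in this exchange, here is a minimal sketch of a standard two-point zeroth-order gradient estimate from noisy evaluations. It is only the classical construction (not necessarily the kernel-based estimator of the paper, which exploits higher-order smoothness), and all function names and default values below are mine.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, noise_std=0.1, rng=None):
    # Classical two-point zeroth-order gradient estimate from noisy values
    # f(x +/- delta*u) + noise, with u uniform on the unit sphere.
    # For smooth f the bias shrinks as delta shrinks, while the noise term
    # contributes a variance scaling like (d * noise_std / delta)^2 -- the
    # bias/variance role of delta discussed above.
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    y_plus = f(x + delta * u) + noise_std * rng.standard_normal()
    y_minus = f(x - delta * u) + noise_std * rng.standard_normal()
    return (d / (2.0 * delta)) * (y_plus - y_minus) * u

def zeroth_order_descent(f, x0, n_steps=1000, delta=0.05, lr=0.01, radius=1.0):
    # Plug the estimator into projected (sub)gradient descent on a ball.
    x = x0.astype(float).copy()
    for _ in range(n_steps):
        x -= lr * two_point_gradient_estimate(f, x, delta)
        nrm = np.linalg.norm(x)
        if nrm > radius:
            x *= radius / nrm
    return x
```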
We consider online convex optimization with noisy zero-th order information, that is noisy function evaluations at any desired point. We focus on problems with high degrees of smoothness, such as online logistic regression. We show that as opposed to gradient-based algorithms, high-order smoothness may be used to improve estimation rates, with a precise dependence on the degree of smoothness and the dimension. In particular, we show that for infinitely differentiable functions, we recover the same dependence on sample size as gradient-based algorithms, with an extra dimension-dependent factor. This is done for convex and strongly-convex functions in constrained or global optimization (with either one point or two points noisy evaluations of the functions). Joint work with F. Bach.
10.5446/20252 (DOI)
Okay, so, well, actually thanks to the organizers for this nice invitation and thanks to the European Research Council for funding the research that I try to summarize in this talk. So I decided to be lazy and use as a title just the acronym of the project. So this is a project, and what I will be talking about is the result of a collaboration with a number of postdocs, students and colleagues. This is only a sample of those who did the main work that I will talk about. So the basic idea that I would like to try to promote here is that there are some fruitful connections between signal processing and machine learning, and between all the activity that has been around dimension reduction and compressive sensing, that could be leveraged in ways that have maybe not been completely explored in machine learning. So the idea is: can we develop some kind of compressive machine learning approaches that leverage ideas from dimension reduction? Since I'm stating it as a question, there will be more questions than answers, but I will provide some case studies, namely on compressive clustering approaches that leverage some random Fourier sampling ideas, and on compressive Gaussian mixture model estimation. Then we'll see that there are a few things that can be leveraged — a few results about dimension reduction and information preservation, so kind of information-theoretic guarantees. We'll see how far we can get. So my one-slide view on machine learning is that we have data and we have some task which, at some stage, consists in inferring some parameters: the parameters of a classifier, the centroids when you do vector quantization — lots of data and some parameters to estimate. One of the main issues being how generalizable the results that you get from these learned parameters will be when you get new data with the same underlying distribution. So here, take your favorite example from unsupervised techniques — PCA, clustering, dictionary learning — or supervised techniques where you're considering classification. So one classical way of seeing it, but I would like to emphasize it, is to use a geometric viewpoint on this type of problem. So we consider the data as a cloud of points in high dimension. And here, where the cloud of points has some structure, maybe the task is to infer this structure. But in any case, you can either represent it as a point cloud or as a collection of columns from a large matrix. And maybe it is important, because in many talks those were maybe the rows of the matrix. So here I'm using columns. I know, I know, but I can't help it: these will be columns, and this will be convenient. Actually, it's so much more convenient with columns. I agree it's more convenient, but the data come in line by line in a text file, one data point per line. Yeah, that's a remark from somebody who actually manipulates data files. I don't know. Yeah, when you store them in MATLAB, they come as columns. But anyway, okay, so we say high dimensional. There are two sizes, two types of high dimensions: the dimension of each column, so the dimension of the features — if each column is an image or a signal, it can already be millions of dimensions, but more reasonably, if it's a SIFT descriptor, it's already a few hundred dimensions. The other large dimension is more the volume of the collection, of the training collection.
And so somehow the question I want to address is how much can we hope to compress this whole collection before we even start performing some learning tasks. So there are different ways of compressing one way. And again, let's try to use this geometric picture is we have let's let's think about each column of the matrix as a high dimensional vectors, it lives in a high dimensional vector space, but we can use some projections and sketching some low dimensional projection to map it to a low dimension lower dimensional vector. So we get a collection of lower dimensional vectors. In certain cases, maybe this is enough. But and this is actually something that is that has been used already in machine learning. So in the for example, in the work of Calderbank, but in most of the cases, this requires some some knowledge and model on the geometry of your point of clouds, your cloud of points. That's it has some low dimensional structure. Another issue is that, well, probably the impact is limited when really you have a large collection. If you have a large collection, reducing the dimension of each feature vector, well, has some impact, but quite limited. So there is an even stronger challenges in the era of large collections to compress the collection size itself. So we can think of of these collections as large point clouds. But there's an alternate way of trying to reduce the size of a collection, which again is related to the talk of garbage this morning. There's somehow the idea of sketching or sampling. So core sets that I know very little, but I think are related to sampling methods where you you find either clever ways of sampling in your data collections point that will be relevant to your to your task. So this maybe with leverage calls. There's also sketching more sketching based approaches. I'm thinking of the early works of the paper, code mode, etc. with histograms that can be histograms in slow dimension that can be with the limited with the finite number of beans, you can consider them as vectors and you can actually sketch them. However, when you're really in high dimension, you cannot afford to even build a histogram or you don't even know how to discretize how you would discretize space. So this calls for other approaches. That's the spirit that we'll be investigating and that we have been investigating is that of sketching with the idea that you start from a collection and you consider actually this collection as represented here as the empirical probability distribution of your data. And what you will would like to build is a sketching operator that will start from the collection and build a finite dimensional vector, the sketch. This sketch so will usually be nonlinear in the vectors themselves, but should be designed so has to somehow preserve the information content of your data relative to the task that you want to perform. So the sketch will be nonlinear in the feature of actual, but the idea is that we can build it to be linear in the probability distribution of the data. That's actually quite easy. This linearity will be what favors a number of parallels with inverse problems and sparse reconstruction. So before we go further, let's take an example. So here I'm not yet describing what's the nature of the sketch, but this is an example of compressive clustering. So here I just draw a cloud of points so that was drawn from an artificial mixture of four gaussians. I guess you see that they are four clusters. So they're not too separated, but they're not too mixed either. 
The blue centers are the actual centeries of each of the gaussians. And here we design an algorithm that's an approach, two steps. First step, we sketch. We compute a 60 dimensional vector. And from this vector only, so we completely forget the rest of the collection from this vector through an algorithm that's inspired by sparse reconstruction algorithms. We estimate the centeries of the gaussians. And here you can see a fairly good match between the estimated centeries and the original ones. So here you take the data matrix and you transfer it to your sketch vector. One sketch vector, not for data. Exactly. So this is an aggregated description of the whole data matrix. So let me maybe contrast with what I understood from your Garveston. In your talk, the matrix was sketched with a linear, by multiplying by a linear projection. And here what the sketch I'm going to build is not a linear projection of the data vectors. It's not a linear projection of the individual data vectors, not of the rules of the matrix either. But it's going to be linear in the probability distribution of the data. But this should be clearer in a few slides. So what are the potential impacts of these types of methods? Well, in terms of memory and computational resources, if you have an increasing collection size, typical methods for clustering like k-means or in Gaussian mixture models, expectation maximization algorithm will have a cost indicated by the pink curve that grows with the collection size. Here with these type of approaches, you have to sketch the collection. So there's something linear in the collection size, but we'll see this can be distributed. So this is the yellow curve. It's linear in the collection size, but it's not that costly. And there's a fixed cost associated to reconstruct learning from your sketch. So if the collection size is relatively small, it's not really worth it. But when you get to large collection size, essentially you have a fixed cost for learning. And in terms of, so both in terms of memory and, well, in terms of memory, it's even clearer. You have a fixed size sketch. So somehow you're forgetting the collection and you can do it online. You can compute the sketch online as we will see. So, yes? But from a statistical perspective, would you want that the size of the sketch should increase with the data? It is possible. Yeah. So for the moment, let's say to make things simpler, let's think about you have a given task and you have a fixed size sketch. Actually, it's relatively easy, for example, with doubling schemes to have to progressively increase the size of your sketch. But it's not clear in which scenario it is desirable, depending on the trade-off between accuracy and complexity that you may want to reach. So let's keep it simple. Let's think of it as a fixed size and one of the questions will be, okay, what is the order of size of sketch that is reasonable? Again, I have more questions than answers regarding this. So just to make it, maybe try to make it clear what I mean by the sketch, it's the idea is to make a parallel with this geometric picture I had before. So in single processing, we're used to thinking about the objects that we manipulate are vectors in a finite dimensional space, sample signals, sample images. And so we do low dimensional projections. Here in machine learning, somehow the statistics, the natural object to manipulate is the distribution of your data. 
And you would like to compute something linear in the distribution of your data, but that's mapped to a finite dimension, from which you gather some relevant information. How can you do it? Well, if you really had your distribution, you could simply choose a number of functions and you compute the expectations of these functions. And you see that the expectation of each function is linear in the underlying distribution. And you can also estimate it with finitely many samples by simply an average, an average. But you could choose those other estimators, more robust estimators of these functions if needed. So this is both, this is nonlinear in the feature vectors because the HL functions that you choose here can be nonlinear, probably should be nonlinear. But this is linear in the distribution. And somehow this, we've realized recently that this is, this can be interpreted as a finite dimensional version of the mean map embeddings. So you take a distribution and you map it to a Hilbert space here, a finite dimensional Hilbert space where you can measure distances and, and okay, work with it. Any questions at this stage? So with this idea of, okay, using a sketching trick in this type of sketching trick for machine learning, we can also try to mimic the questions that have been addressed in, in single processing around the notion. Well, the focus was on inverse problems here that would be once you've chosen a sketch, recovering a distribution from the sketch is sort of method, generalized method of moments. But probably this, you don't necessarily want to do density estimation. So the metric with which you're interested in the, the reconstruction of your density is actually might be related to the task, the learning task that you have to consider. And we'll see also that's, well, it's a generalized method of moments, but we want to do some dimension reduction. So in fact, we are, we have the possibility to choose the, the sketch. In single processing, this would be compressive sensing, you do, or this would be in computer science, it would be sketching, you, you, you design them to preserve information and probably to have some also computational, favorable computational aspects. Well, here we'd like to do the same thing and compressive learning would be about designing sketches with similar properties. Absolutely. The empirical risk is some sort of sketch except that in fact, usually it's an infinite dimensional sketch because you have to use the whole family of possible values of parameters. So if you only have to do, let's say hypothesis testing or, I mean, you, you have to compare the risk of a finite number of parameters. Sure, you get a sketch. That's a natural sketch. In fact, more generally, the, the question is somehow, can you design a finite dimensional sketch that gather enough information that you can reconstruct your empirical risk or at least find the minimizer of empirical risk? So before, so that was for the general picture I'm trying to explore here with you. Now I would like to give some highlights on some compressive learning examples, mostly heuristic. So they're done within the PhD thesis of Anthony Bourrier, Nicolas Kirimen, who is still under his PhD and in collaboration with Patrick Perez. So we've seen that the ideas would take a point light considered as an empirical probability distribution of your data. And we designed a sketching operator. Typically, the sketch takes this form. You have to choose functions IHL. 
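In symbols (with my own notation, since the slide itself is not reproduced here): choosing functions h_1, ..., h_m, the sketch of a distribution P and its empirical counterpart computed from samples x_1, ..., x_n are

\[
\mathcal{A}(P) \;=\; \Big(\mathbb{E}_{X\sim P}\,h_1(X),\;\dots,\;\mathbb{E}_{X\sim P}\,h_m(X)\Big),
\qquad
\hat{z} \;=\; \frac{1}{n}\sum_{i=1}^{n}\big(h_1(x_i),\dots,h_m(x_i)\big),
\]

so that \(\mathcal{A}(\alpha P + (1-\alpha)Q) = \alpha\,\mathcal{A}(P) + (1-\alpha)\,\mathcal{A}(Q)\): the sketch is nonlinear in each sample but linear in the distribution.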
And the challenge is to choose them so that they preserve the information relevant for your problem. So let us go back to this compressive clustering example that I showed. So the standard approach here would be K-means. So with K-means, you alternate your clusters. You manipulate all your data and draw Voronoi cells, et cetera. Here, what we, the way we designed the sketch was by doing some analogy with signal processing, where we know that if something is specially localized, sampling it in the Fourier domain is a good way to make sure that we don't miss anything. So think of a signal that is perfectly specially localized. If you do only a few random samples, you will probably miss the information that was present, unless you have many samples. But if you sample with Fourier, in Fourier, everything is well known to work now. So we use this analogy here. Having few clusters means, I mean, a good approach, the ideal approximation of clusters are Dirac cells. If you have K clusters, actually, they are K Diracs. And here, a good way to sample such a distribution would be to sample in the Fourier domain. So sampling in the Fourier domain simply means sampling the characteristic function of your distribution. And here, we are sampling the empirical characteristic function. So the sketches that we used. So this is related to what you said about the relationship to mean mapping. So you just kind of assume that HL belongs to a kernel or an RKHS. I mean, is this basically doing like linear sketching but on transform features? So if you transform your accesses and then this is, yeah, this is related to, there are actually, you can think of it, and this is we're still working on the relations as probably a good way to design sketches is to design a kernel mean map that's a pretty infinite dimensional and sample it. But the interplay between the two and the dimensions that you can use to sample is not completely clear for us at this stage. So here, precisely, well, here, starting from the single processing intuition that you sample in the Fourier domain, you sample the empirical characteristic function. So essentially, what you have to choose is certain frequencies, omegas. And in fact, when you look at it and you think about random Fourier features, this is very related. So this is not directly a random Fourier feature, but this is a pooled, this is the empirical average of certain random Fourier features. That's a description of the distribution of your data. So this is the sketch. And, well, behind, there's also a lot of questions on how you choose to choose the sampling frequencies. These are details I will not give in this presentation. There's still a lot of, I mean, we have heuristics that needs to be related, I think, to these kernel-minimum embeddings to have proper choices. So if you're interested in more details, there's a recent paper at ICASP that's presenting this week, actually, by Nicolas Kerriven. Okay. So in this case, we use this type of sketching. So with 60, so 30 complex valued numbers, so 60 entries, sample the characteristic function and the result, that's how do we obtain the result? We need an algorithm that starts from the sketch and performs the estimation of the centroids. For that, we need to exploit, we need some model. I mean, we have taken an arbitrary distribution and sketched it, so there's no way we can reconstruct it without a model. And the model here is a Gaussian mixture model with equal variance. So all variances are identity. 
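Before moving to the decoding step, here is a minimal sketch of the computation just described — averaged random Fourier features, i.e. the empirical characteristic function sampled at chosen frequencies. The plain Gaussian frequency sampling below is only a placeholder for the heuristic the speaker alludes to, and the function names are mine.

```python
import numpy as np

def draw_frequencies(m, d, scale=1.0, rng=None):
    # Plain Gaussian frequencies for simplicity; the talk mentions a heuristic
    # radial distribution with low density near the origin, not reproduced here.
    rng = np.random.default_rng() if rng is None else rng
    return scale * rng.standard_normal((m, d))

def sketch(X, W):
    # Empirical characteristic function of the data, sampled at the rows of W:
    # the average of random Fourier features exp(i w^T x) over the n samples.
    # X: (n, d) data matrix, W: (m, d) frequencies -> complex vector of size m.
    return np.exp(1j * X @ W.T).mean(axis=0)

# Sketches of disjoint data chunks can simply be averaged (weighted by chunk
# sizes), which is what makes the computation streamable and distributable.
```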
So the only parameters are the weights of the Gaussians or the clusters and their centroids. So if you assume that your distribution is a mixture of Gaussians, by linearity of the sketching operator, the sketch is itself a mixture of sketched Gaussians. The good news is that when you take a Gaussian, the characteristic function has an analytic expression, so everything can be written explicitly. And you can design an algorithm that will perform the decoding, the reconstruction, inspired by that Mimix orthogonal matching pursuit and exploits certain gradient descent averaging the explicit expressions of the gradients with respect to the parameters to perform this reconstruction. So this is something, actually we leverage orthogonal matching pursuit with replacements rather than OMP, which brings significantly better results because of somehow the better ability to escape from local minima. And that's my hand waving interpretation. We have no analysis of the convergence of the guarantees for this algorithm for the moment, but we just observe that it performs really well. I'll show some more examples. So this is for compressive clustering. Now, can we extend it to more types of problems? Well, there's something. I've already used a mixture of Gaussians. Gaussians with identity co-variances. It's not difficult to extend it to compressive Gaussian mixture models. So what we've done, we've not completely relaxed taking the arbitrary Gaussian mixture model just with diagonal co-variance matrices. That's already a richer set. And with it, you can adapt the algorithm. So either reuse the algorithm we had before, but just use gradients with respect to these non-constrained Gaussians. Or, and that will be used in the next experiments, have something which gives a computationally more efficient algorithm, but not designed for clustered scenarios, more for scenarios where you have overlapping Gaussians. You don't really want to recover the centers, but you want the Gaussians to fit well. So this is using hierarchical splitting to slow, to accelerate the algorithm. Any question? So let's see an example of, well, it's a proof of concept of how this can be used in a large-scale scenario. This scenario considers the speaker verification. So are you familiar with speaker verification? So who is not familiar with speaker verification? Good. So in the word speaker verification is something where at, well, suppose you're calling your bank and you say, I'm Remy Grébonval and I would like to transfer all my money to the following account number. Probably, I mean, you're claiming an identity and speaker verification is about checking that this is the identity is the one you claim. So it's not about deciding about among a million of identities who you are, it's just checking whether you are the person you claim. This is typically done with trained models of each of the possible claimed identities. But in general, you have very little data to train for a given model. So the way you train a model is in two steps. First, you build a so-called world model with as much data as you can, representatives of a very wide diversity of speakers. And then you'll adapt this model to be a representative of each speaker. So there's a step where you type large collection and here we use the 2005 NIST database of 50 gigabytes of data. So for our scale, this is big, maybe this is small compared to the petabytes that some may be manipulating. But this is thousands of hours of speech. And here you learn a Gaussian mixture model. 
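The analytic expression mentioned above is what makes the decoding tractable: for a Gaussian with mean mu and diagonal covariance, the characteristic function — hence its sketch — is known in closed form, and by linearity the sketch of a mixture is the corresponding weighted sum. Below is a minimal sketch of this forward model only (the greedy OMP-with-replacement decoder itself is not reproduced, and the function names are mine).

```python
import numpy as np

def gaussian_sketch(mu, var_diag, W):
    # Characteristic function of N(mu, diag(var_diag)) evaluated at the rows of W:
    # exp(i w^T mu - 0.5 * w^T diag(var_diag) w).
    return np.exp(1j * W @ mu - 0.5 * (W ** 2) @ var_diag)

def gmm_sketch(weights, mus, var_diags, W):
    # Sketch of a diagonal-covariance mixture: weighted sum of Gaussian sketches.
    return sum(a * gaussian_sketch(mu, v, W)
               for a, mu, v in zip(weights, mus, var_diags))

def sketch_residual(z_emp, weights, mus, var_diags, W):
    # A decoder searches for parameters whose predicted sketch matches the
    # empirical one, e.g. by greedily decreasing this residual.
    return np.linalg.norm(z_emp - gmm_sketch(weights, mus, var_diags, W))
```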
So two approaches will use either expectation maximization or the proposed algorithm. Now you've learned this model and there's a so-called adaptation procedure for each claimed identity. And now when Alice calls and asks to transfer some money, so here is the speech that is pronounced. It is compared both to the world model and to the claimed identity and generalized likelihood test. I mean, a likelihood test is conducted to decide whether we accept the transaction or not. So what we did was to compare this, the compressive approach with the classical EM approach in this setting. So if you look at in more details at the database that is used to learn this universal background model, this is a database of, so, first thousands of hours of speech. Now what are the individual vectors that we consider? Well, the speech is cut into small time frames and each time frame you compute MFCC coefficients in dimension 12. So you don't need to know what are MFCC coefficients. This is equivalent of 15 audio. So if you take the whole database, there are the 300 million such coefficients. In fact, if you look carefully, there are many of these coefficients that corresponds to inactive, no speech. I mean, there's silence between two words and so there are some coefficients that are not really helpful for learning and it's a standard technique to first do silence detection. But even after silence detection, you still have 60 million vectors in your training collection. In fact, when we use the state of the art EM C++ toolbox, the maximum size that it can manage is 300,000 samples that fit in memory. So we learned, so we first conducted an experiment with this number of training samples, either with the compressive approach or with expectation maximization. And here you can see the picture. So this is essentially a false positive versus false negative curve where, so the lower the better and each color, so the violet curve corresponds to EM. The rest corresponds to the proposed approach with various sketches. So you can see that, okay, if you increase the sketch size, it gets better, but doesn't really quite reach the quality of EM. Now remember that EM was limited by the collection size. We couldn't run it with more data. And our method can, yes. So here was the sketch using also those random for you features? Absolutely. And what was the sampling frequency? The sampling frequency. Okay, so the Ws, I said there's a, there's a there's a heuristic that they've looked in the papers, but essentially, so it's their isotropically distributed. And regularly there's a distribution which is, which has low density around the origin because all distribution essentially the characteristic function is always one at the origin. And even though you have classical polynomial moments that you could measure implicitly there, there's not much information. And so there's a heuristic that samples more where the gradient with respect to the parameters of your model is expected to be high. So with this sampling strategy, we can run the proposed approach with the 60 million, with the sketch computed on 60 million samples. And here what we get is that for sufficiently larger sketch size, we go below the EM curve so the results are getting better. So in details here, if you, if you, we use a small sketch, so it's a 500 samples represent the 60 million collection. And from this, this small representation, we are, we're not quite at the performance of EM, but not that far. 
If we want to match essentially the performance of EM, we simply take twice the number of, of the size of the sketch. And with a slightly larger size of collection, so compared to the size of the, I mean, slightly larger sketch size compared to the initial collection, you get something better than EM. So this is really a proof of concept. I'm not claiming this is comparing to state of the art. The EM is probably not no longer the state of the art in, in speaker verification. But this is to give you an idea that yes, there's a, by simply unrolling these, these ideas, these parallel, you get something that's, that is not, well, that somehow exploits the fact that you have a large training collection. And that's by really capturing empirical averages of a larger collection. Probably you capture more of the diversity of the collection. And so you can exploit it rather than, better than if you were simply sampling 300,000 samples from the collection. And well, regardless, this is where you could also wonder, should I increase the sketch size? Well, you see that's probably if you have a few samples, you can take a small collection. But if you have more samples, you, you'd like to really benefit from the more samples by starting to increase the sketch size. Okay, how much time do I have? Okay. So now there are a few things I would like to talk about after this illustration of the general ideas of sketching. The first one is, okay, let's say it's, it's, it's an interlude about how this can be implemented and potential computational efficiency. So, and then we'll dig into some of the possible theoretical connections with what is known in inverse problems and compressive sensing. So regarding computational efficiency, well, this is the expression of the sketches as we propose to compute them with essentially averaging random Fourier features. So in terms of architecture, you start from your large collection of training samples. And then there's your matrix of, of, so the rows of W here are your, the frequencies where you sample your empirical characteristic function. So you first enlarge, you have to measure more frequencies than the initial dimension of your vectors. So you first enlarge your vectors, apply a nonlinearity, which is the complex exponential, and then average to compute the sketch. So this is probably reminiscent of, okay, one layer of a network. And I believe there are, there may be more connections with, with neural networks there, but they are still to be investigated. I, there are some work, some recent work by the group of Duke on, on some information preservation guarantees for neural networks. But I think they, they consider more the information preservation in, in the sense of being able to reconstruct the initial data. And here what I, the tech home message would be rather that the information that's important is the distribution of the data that you want to preserve. At least in this scenario, this is what is considered. Now, okay, this is the architecture. When you think about it in terms of privacy, and this is related to the, the, I think that was partly mentioned. Well, once you've computed the sketch, you, you just hand out the sketch, not the rest of the data. So to some extent, there, this could help with some privacy issues. Of course, if the sketch has sufficient information to perform certain tasks, well, you will not be private with respect to this task. 
But if we can investigate further the information preservation guarantees with lower bounds and other bounds, we may expect that to have lower bounds that ensure that there's not enough information to perform certain tasks with, with a sketch. So besides, of course, this is, you can compute the sketch itself online. So straight, it's compatible with streaming and compatible with distributed computing. So in summary, with this, so sketches in this particular compressive GMM scenario, you start from a large collection and you compute the sketch. You achieve high-dimensional reduction. And then you are able with certain algorithms to, that are memory efficient and relatively computationally efficient to extract your information provided, provided that everything has been designed that you have guarantees of information preservation. So in this talk, I've, I've shown empirical evidence of scenarios where you seem to have information preservation and we'll see what could be roads towards proving information preservation. So this is, this is related to work we did a couple of years ago with Anthony Bourier, Mike Davis, Thomas Peleg and Patrick Perez. So with the idea that there are techniques classically used to analyze the low-ranked matrix completion, sparse recovery, general inverse problems in general that actually have some really large generality and that maybe they're worth packaging in a sufficiently universal way. So typically in, in inverse problems, you consider that you have a high-dimensional vector, you observe a low-dimensional version. So without further information, there's no way you can hope to reconstruct. But if you know that your data comes from, is well approximated by a sparse vector, then there are, there are algorithms with reconstruction guarantees. So under certain, now well-known conditions for case sparse recovery, you're able to build an algorithm that we call a decoder here. This is the terminology introduced in a, in a nice paper by Albert Cohen, Ron Devoire and Paul Kondamen. So these decoders, ideally from your measurement, you want to be able to build a decoder that has some reconstruction guarantees. Here the reconstruction is that with metric that I will leave quite fuzzy. So the decoder delta, even if you have some noise on your data, the reconstruction of your X will be controlled essentially by the size of the noise if your data satisfies your model. So if it is exactly case sparse here. In the, so this has been investigated. I mean, we know there are cases where there are such decoders, but the nice work of Cohen and Co-authors was to investigate the fundamental information theory equation is, okay, when are there, can I expect such a decoder to exist? So we know a number of decoders, and one minimization, greedy algorithms and so on. I, okay, I expect that a number of you are more or less familiar with, with this. And in terms of guarantees, one of, one possible focus is so-called uniform guarantees, worst case guarantees. They're related to the well-known restricted isometric property. So now a question. Who is, who is not familiar with the restricted isometric property? Okay, enough people that I will spend some time explaining. So these algorithms, they are designed to recover case sparse vectors. So you have case sparse vectors, you project them to low dimension. And what you would like to be sure is that there's no way you can confuse two case sparse vectors. 
So if you have one case sparse vector and another, and that you project them and get essentially the same representation, then you're lost. There's no way you can hope to reconstruct case sparse vectors. The restricted isometry property is essentially something that just says that if you take two case sparse vectors that are sufficiently distant in the original space, they will remain distant in the, in your observation space. And this means that considering the difference between two case sparse vectors is a vector that has two k nonzero components. This is the expression of the restricted isometry property. It preserves the, the, the norm for every two case sparse vector. So this is what is known for, in the case of sparse recovery. But actually there's a number of other models that have been considered in the literature. So related work has been done with low rank, so with sparse vectors, with vectors that are sparse in a dictionary and a number of other low dimension models pick your favorite one. So in, in our case, we would be in particular interested with these models where the low dimensionality comes from the fact that you have a mixture of few gaussians, so you have few parameters somehow. And you would be, like to consider something that doesn't really live in a, in a finite dimensional vector space, but in the space of probability distributions or measure, or finite, finite sign measures. So the question is still the same. Okay, if you're given some model, some low dimensional model and for the moment let's not specify what we mean by low dimensional model in some space and you have some measurement operator, think of the sketching operator. When can we expect to, that there exists some reconstruction algorithm? So that, that there exists a decoder with instance optimality guarantees. This, this is a question that is not new, that has been, I mean that is related to all questions from the forties in the literature on embeddings where people have been investing, getting questions such as, I have this fractal set here and I mean can I map it to finite dimension and what is the dimension I can map it to. But I will not develop this further here. We'll see that actually the existence of such decoders is again related to a generalized notion of restricted isometric property. So instead of stating one new restricted isometric property for each type of model, why not state one for any model sets. And here it is. So we consider sigma which is a some subset of your ambient space and you'll say that your measurement operator M satisfies a restricted isometric property on this model or actually on the second set of this model that it's the difference vectors between two vectors of the model. If, well this inequality holds, so here I wrote it in an asymmetric way but you can write it with the usual way with 1 minus delta, 1 plus delta in a straightforward manner. And it's possible to show that in fact if there exists a decoder with this reconstruction guarantee then necessarily the matrix M that you're considering satisfies the restricted isometric property. This is just an analysis of the worst case conditions. And the second result which is probably the most interesting thing is that in fact if the restricted isometric property holds then there is indeed a decoder with reconstruction guarantees. I come back to the decoder. You're anticipating on my next 40 slides. Just kidding. So it's just an existence result. 
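For reference, the generalized restricted isometry property just described can be written (here in the usual symmetric form, rather than the asymmetric one on the slide) as: for all x, x' in the model set Sigma,

\[
(1-\delta)\,\|x - x'\|^{2} \;\le\; \|M(x - x')\|^{2} \;\le\; (1+\delta)\,\|x - x'\|^{2},
\]

which recovers the classical RIP when Sigma is the set of k-sparse vectors, since the difference of two k-sparse vectors is 2k-sparse.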
So this is why I say these are information-theoretic results: I mean, it's not dealing with the complexity trade-offs. There are many nice questions around it; I'll try to evoke some of them. So here it implies that if your matrix, or your measurement operator — because you're not necessarily in finite dimension, it's not necessarily a matrix — satisfies a RIP, then there is a decoder that provides exact recovery with stability to noise. So whenever you take x which is in your model set sigma, you measure it, you add some noise, the decoder provides an error proportional to the size of the noise. So this is just an extension; I mean, the proof just follows from rewriting that in the work of Cohen, but we've just shown that it can be extended to an arbitrary model set. And as a bonus — that's probably the main difference with the early embedding results from the embedding literature — there is also some stability to modeling error. So even if your initial signal does not exactly belong to your model set, but it is close enough with a metric that's also related to your model set, then, I mean, you measure its distance. It doesn't even need to be close. You measure its distance and then there's an additive term in your inequality. So this is both stable to noise and robust to modeling error. Now of course Francis asked: yes, but what's the decoder? So some work done this year with Yann Traonmilin will try to provide some answers, but first maybe I need to just exhibit what's the decoder in the previous proof. So you know that you have a matrix with the restricted isometry. There's a decoder. What's the decoder? Well, it's written here. Given your observation, you find the x that minimizes the weighted sum of your data fidelity plus your distance to the model set. Are you happy? How do you manipulate this? In particular if your distance is not so... So in certain cases here the distance d_Sigma may look abstract. In certain cases it's essentially the one norm. So it could look okay, except that you're computing the distance to k-sparse vectors, so that's not nice to manipulate. The good news is that these decoders are also noise blind: you don't need to know the noise level. This decoder will work and provide these guarantees whatever the noise level. So you don't need to tune the noise level to begin with, but I agree it's not very convenient. Something slightly more convenient was proposed by Thomas Blumensath, who proposed with Mike Davies the iterative hard thresholding algorithm for sparse reconstruction. And the projected Landweber algorithm is just its generalization to arbitrary sets. So here's something connected to the talk of Alexandre d'Aspremont yesterday, who manipulated proxes, and there was a question by Guillaume Obozinski on: yes, but what if the prox is not so easily computable? It will pop up here. So the idea of the projected Landweber algorithm is simply that... It's an iterative... I mean, it's a projected gradient. At each step you do some gradient step to decrease the data fidelity term — that's when it's an L2 term — and then you project to the closest point on your model set. So that's perfectly fine when your model set is k-sparse vectors or low-rank matrices, sets for which you know how to do it efficiently. But it's easy to exhibit a number of cases where it's NP-hard to compute these proxes.
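For the one model set where the projection is cheap — k-sparse vectors, where it is just hard thresholding — the projected gradient scheme just described can be sketched as follows. This is a minimal illustration, not the speaker's code; the step size and iteration count are arbitrary choices.

```python
import numpy as np

def hard_threshold(x, k):
    # Projection onto the model set of k-sparse vectors: keep the k largest
    # entries in magnitude, zero out the rest.
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def projected_landweber(M, y, k, n_iter=200, step=None):
    # Projected gradient for y ~ M x with x assumed k-sparse: a gradient step
    # on the quadratic data fidelity followed by projection onto the model set.
    if step is None:
        step = 1.0 / np.linalg.norm(M, 2) ** 2  # conservative step from the spectral norm
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * M.T @ (y - M @ x), k)
    return x
```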
Okay, so this opens a number of questions on maybe characterizing what are the model sets for which you actually have a prox that is computable, because for these you may have dimension reduction and computability, which would be nice. And in this case there's a proof of convergence. So with a well-chosen step size, related to the constants in the restricted isometry property of your matrix, corresponding to delta smaller than one fifth, you can prove that this algorithm is convergent and that it recovers stably your... No, you can prove that the iterates provide a stable recovery. So if your data was in your model set, you recover it exactly, I mean you converge to it. And if you're not in your model set, you may not be convergent, but you will circle within a ball that's not too far from the noise level and the modeling error. Again, okay, maybe not so convenient, and since there were a number of talks on convex optimization, maybe what you would rather be expecting is something of this type, where the decoder is based on the minimization of some function. So we know a number of them. The decoder would be: I try to find, among all solutions within the prescribed data fidelity, the one that minimizes some regularizer f of x. So if you take sparse vectors and f the L1 norm, et cetera, et cetera — well, for a number of such pairs of model sets and regularizers, many authors have proved that if the restricted isometry constant is below a certain value, then this is an instance-optimal decoder. In particular, the results of Cai and Zhang are known to be sharp — so maybe the one over root two cannot be improved for L1 minimization, closing an endless story of work showing that 0.41, 0.42, 0.43 and so on were working. But still, every time you get a new model and try a new regularizer, there's a new guarantee to be obtained. Well, what we investigated with Yann Traonmilin was the existence of an underlying principle. And so we got one. So please, please don't read this small font — it's in small font precisely so that you don't read it. So in fact, if you have a model set that's simply a homogeneous set, so it's a cone — if you multiply by a positive scalar, you remain in your model set, and that includes the case of unions of subspaces, which are stable by scalar multiplication — and you take any regularizer, you can actually define a restricted isometry constant. It just depends on your model set and your regularizer. And with this, you can say that any measurement matrix that has a RIP with delta smaller than this particular constant will be compatible with regularization with f on your model, and the corresponding decoding will work, will have guarantees. So what we would like to do would be to take a model set and find out automatically what's the right regularizer. At first we thought that atomic norms would do it. The paper on atomic norms is nice, but if you look carefully at it, what it does is say: okay, here is my model set, but in fact this model set has special points in it, and these are what I call the atoms, and they have in fact special properties that I don't really discuss — certain orthogonality properties between atoms — and from this they build the atomic norm that will work. Aren't the atoms just the extreme points of the atomic norm? Yeah, but there's a — okay, here's an example. Simply take your model set to be the set of two-sparse vectors.
So probably the natural way of designing atoms would be: you take every normalized two-sparse vector. And then the atomic norm is the so-called k-support norm with k equals two. And it doesn't allow recovery of points that are at the corners of the L1 ball, because the atomic norm is flat there. So I think the question of starting from the model and constructing the regularizer is still an interesting and open one. Okay, if you start from the norm, you can define a model class that would be adapted to the norm, with a constant. That way you can do it. I mean, starting from a norm, yeah, but I'm interested in the reverse way: starting from a model — which are the vectors you want to reconstruct — how do you build your regularizer? And one possible way that we'd like to investigate is to try to design it to maximize this constant. But you may also have — we have done some work with Guillaume Obozinski showing that some sets do not have good convex relaxations. So it may be that it is impossible to design an f that is convex and compatible; convexity may squash many types of model classes into simpler ones. So it may be impossible. Yes, it may be impossible to have a convex one. Here I didn't even say that f is convex: take some f, some sigma, I can define this constant. Okay, but then at some point you need to be able to optimize with it. But sometimes f is not convex and yet, if you look at the squared loss plus f, it's convex. In certain cases — there's work by Ivan Selesnick on that — it's the overall problem that you pose that is convex even though the penalty is not. Okay. So, just to give you an example of what can be obtained with these results — and I probably have minus one minute — you can now unroll the mechanics of the previous theorem and recover existing results, get sharp results for new examples. For example, I don't know if anybody is interested in recovering permutation matrices from low-dimensional projections, but you can do it provided you have a RIP with constant two thirds, and that can be achieved — I mean, you can design dimension reduction operators that satisfy this RIP with a number of measurements that's controlled. I don't have in mind the covering numbers of the set of permutation matrices, but that can be done. So since I'm a bit short, I'll probably skip this. It was just to illustrate the idea that you can already use this technique to somehow design a regularizer for a model. So for a certain model, you could either use the L1 norm, as is done in classical papers, or, knowing the sparsity in different blocks, weight the L1 norm — and with this weighted L1 norm you have RIP constants that are higher, so better recovery guarantees. So it's high time to conclude. The main message I try to convey here is that there are interesting parallels to be drawn between signal processing and machine learning, with the idea that compressive sensing could maybe give rise to compressive learning techniques with the ability to substantially reduce the size of the collection while preserving the information necessary to perform the task that you want. And I illustrated it with particular sketches that are nonlinear in the data, but linear in the probability distribution. So now just the last minutes for some advertisement for some further results that I couldn't fit in the previous hour.
So for the compressive clustering and compressive GMM, these are the references of the papers where they can be found. Now regarding information preservation, what I showed with this restricted isometry property is the first layer of the story, a worst-case analysis guaranteeing that the information is there in your low-dimensional projection. And this story is here. Now there's the question of how much dimension reduction you can perform while actually preserving information. Well, there are many works currently trying to generalize essentially the Johnson-Lindenstrauss lemma to general model sets. In particular, there's the work of Dirksen that establishes the links between the Gaussian width, as a measure of dimension, and how much you can reduce — what's the dimension you can hope to project to. But it seems that it is not yet the end of the story, and this dimension is not sharp. So there are questions about what is the right measure of dimension for a model set, and in particular, when you have a particular given learning task, how you should measure it. Some of the questions around this are related to the notion of compressive statistical learning. So when what you want to reconstruct is the risk functional, for certain risk functionals we now know that you can actually characterize the intrinsic dimension of sketching that is needed to achieve it. But many questions remain open. So thank you for your patience and attention. So, are there some questions or comments? You used the RIP property to do the analysis. Is it easy to satisfy? Because I read one paper on this; it studies the matrix completion problem. It says that it is not likely that the RIP will be satisfied for the matrix completion problem. Absolutely. So the RIP, the restricted isometry property, is necessary and sufficient to have uniform, worst-case reconstruction guarantees. But there are many analyses that do not require a worst-case analysis — that is somehow related to the point you raise — where there is a typical behavior. So if you do low-rank matrix reconstruction, there are ways of sampling low-rank matrices which satisfy the restricted isometry property. But this is not the point-wise sampling corresponding to matrix completion. So if you want to do matrix completion, where you have sampled a matrix at a few entries, the measurement operator will never satisfy the restricted isometry property, because you can find very simple low-rank matrices that just put a nonzero entry at a non-observed location. So the restricted isometry property is feasible, but with measurements that essentially combine all the coefficients at once. Does that answer your question? So you mean that although it does not apply to the matrix completion problem, maybe it applies to the matrix approximation problem? It applies to the matrix reconstruction problem with certain types of observations, but not with observations that are related to the Netflix problem or so, where you have observed and unobserved entries. For k-sparse vectors, it's hard to check that a given linear map M satisfies this property. Are there other cases, particular cases that you studied, where it can be very easy to check? There's one, but I'm not sure you will be interested in it: if your model set is a linear subspace, then it's essentially a condition number that you can compute. But that observation still turned out to be useful in certain scenarios, where actually you just revisit them under the same umbrella.
Apart from that, I think it is typically difficult. I mean, there are papers now showing NP-hardness of certain problems related to testing whether the restricted isometry constant exceeds a certain threshold. Could you say something about the restricted eigenvalue property? Could it also be used to simplify this analysis, or to see how it is in this case? So you're talking about the restricted eigenvalue property? Okay, there I'm in a slightly less familiar context, but as far as I know it's related to the descent cones of a particular cost function. So here I was talking about the ability in general to do reconstruction: is there information in your measurements or not? The restricted eigenvalue property, I think, is more related to the choice of a particular regularizer, and now you're looking at the behavior of the measurement operator on vectors that decrease this cost function. So this, I think, typically leads to non-uniform results, non-uniform guarantees. When you have a given point you can see how you can decrease the cost function, in which direction you can decrease the cost function, and analyze the restricted eigenvalue property there. I have one question from a practical point of view. How would you choose the size of the sketch? What kind of principle — can you say something about choosing the size? So first, among the things that I didn't really take the time to evoke, there are some general arguments from when you try to generalize the Johnson-Lindenstrauss lemma. There are measures of dimension of your sets that naturally appear, and if the size of your sketch sufficiently exceeds this measure of dimension, you should satisfy the restricted isometry property. In particular, when you have k-sparse vectors, the k log n over k appears naturally — so of course there's a constant that needs to be tuned — but for the Gaussian mixture models that we played with, you can essentially count the number of parameters. So if you have k Gaussians in dimension d, essentially you have k weights, or k minus 1 weights, and k centroids, k variances: do the calculus and you know the size. We have empirically observed that we get phase transitions for the algorithms with this size of sketch. And if you only have data, say you split the data: with the first part you try to learn the minimal size of the sketch, with the rest of the data you... Okay, that's a different problem I haven't paid too much attention to, which is, okay, how to choose your model for your data. Here the point of view was rather: okay, I'm given a model, or I choose my model — I decided I will fit 64 Gaussians — what is the size of the sketch I should use? The one you were evoking is, I must say, much harder.
The talk will discuss recent generalizations of sparse recovery guarantees and compressive sensing to the context of machine learning. Assuming some "low-dimensional model" on the probability distribution of the data, we will see that in certain scenarios it is indeed (empirically) possible to compress a large data-collection into a reduced representation, of size driven by the complexity of the learning task, while preserving the essential information necessary to process it. Two case studies will be given: compressive clustering, and compressive Gaussian Mixture Model estimation, with an illustration on large-scale model-based speaker verification. Time allowing, some recent results on compressive spectral clustering will also be discussed.
10.5446/20250 (DOI)
Okay, so thanks. First, I want to thank the organizers to give me the opportunity to present this work. Actually, I also want to apologize to two or three people in the audience who already saw this talk exactly in the same room two months ago. I changed a few things actually because when I presented this work two months ago, there was only statisticians in the audience, so I made a lot of terrible jokes about people in optimization theory. And actually, today, I removed these jokes. Yeah, so, yeah, actually, because I have to be honest, I know nothing about optimization theory. And still, I wanted to talk about it. So, I mean, the reason why, and actually, it will explain the reason for this work, actually, the reason why is that I come from a different community. So, I come from the community of people working on aggregation of estimators. And in this community, while there are many theoretical results, there are also people using Monte Carlo methods. And when I talk about other theory or Monte Carlo methods to people, like you, I mean, people who know about optimization theory, they usually expect that I'm not able to implement anything from the methods I'm talking about. And in some way, it was true. But, I mean, the object of this talk, the purpose of this talk is to prove that actually some prediction methods in aggregation theory can be approximated using variational approximations, so using actually optimization theory. And even if I don't know a lot about it, you can imagine by yourself that actually you can use a very powerful algorithm to implement these methods, actually. So, I mean, it starts by a short introduction to aggregation theory and then by a theoretical analysis of variational approximations of, in some way, optimization theory for aggregation theory. And so, yeah, first aggregation theory, and it will be a very, very low level introduction to aggregation theory. So, you might know already a lot, but I'm sorry, I have to keep the slides at a level that I can understand myself. And it's a kind of challenge. So, I should not say that, actually, because this talk is being recorded, so some of my students can have a look at it in a few days, so maybe we'll remove the first three minutes before it's published online. Okay, yeah, so just a motivation for aggregation theory and like actually for learning theory, okay? So, you have a sample and you want to learn from it, but you don't want to write likelihood, so you don't want to do like traditional statistics, you know, examples, you've already seen these examples since 25 years on the web, so you know that you can learn something from these data sets, but you don't want to write the likelihood. And so, what you have, yeah, what you usually have to choose in order to deal with this problem, you have to choose a few ingredients that are recurrent in all the versions of supervised learning. You have to, well, you have observations first, so actually I will deal with object levels problems, so x is the object, y is the label. Actually, I will present results in the batch learning setting and in the online setting as well, but in any case, I will stick to the same notations, and then you have to define a set of predictors, linear predictors, kernel, anything you want, okay? So, I will use this notation, f-teta, it's indexed by parameter theta, which can be in finite or infinite dimensional set, theta. 
f_theta, of course, is meant to predict y, and then you have a criterion of success. I will give precise notation later, depending on whether we work in the batch setting or in the online setting, but basically you can think of something like this: in statistics, people tend to use criteria like that, but I will focus more on prediction-related criteria, for example the out-of-sample prediction accuracy. This would be for a classification problem; I will deal with more general problems, classification and regression, later, but you can keep this in mind. So we want a criterion for what a good prediction is, what a good predictor theta is. And finally, in many cases we will use an empirical proxy, an empirical approximation of R, which I will denote by small r; for example, the empirical risk that you want to minimize. So these are basically all the ingredients we need to be able to talk about something in learning theory. And in the aggregation bounds, the so-called PAC-Bayesian bounds that I am going to present now, you need one more ingredient. You know that, in some way, you need some assumption on the set of predictors: you need to control its complexity, not even in order to do some optimization, but just in order to relate, in some way, this theoretical criterion of success to this quantity that you can observe. In order to relate the one to the other, you usually need an assumption on the set of parameters, and in aggregation bounds you usually replace this by a prior probability distribution on the parameter space. It will be used, in some way, in place of a complexity measure like the VC dimension of the parameter space. And all the bounds I am going to present today look like this: you have a bound on the average prediction risk, and it is upper bounded by a quantity which is a balance. You are probably used to the bias-variance trade-off; here you have a slightly different trade-off. You have an infimum over all possible aggregation distributions, and a balance between, on the one hand, this term, which would be the best possible prediction (and in some way here you would not want to take a genuine probability distribution, you would just like to take a Dirac mass at the best possible parameter), and on the other hand a kind of variance term, which is the Kullback-Leibler divergence between the aggregation distribution and the prior. So in some sense it replaces the complexity measure that you have, for example, in VC bounds: in order to keep the first term as small as possible, you want to choose rho as a distribution that is very spiked, concentrated around the best parameter, but when you concentrate rho around a single parameter, the Kullback-Leibler divergence with respect to, say, a uniform prior explodes. And how fast does it explode? Usually this is related to the dimension of the parameter set Theta. So here you have the usual balance between good prediction and complexity. Obviously, this bound is only schematic: I said nothing about what the remaining term is. Is it o(1)? Does it depend on the dimension, on the sample size? I will give precise bounds later, so just accept this shape for now.
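To keep the shape of these bounds in mind, here is a schematic rendering in my own notation; the remainder is left unspecified on purpose, since the exact extra terms depend on the version of the bound being used:

```latex
% Schematic PAC-Bayesian oracle bound: rho ranges over all probability
% distributions on Theta, pi is the prior, K(.,.) is the Kullback-Leibler
% divergence, lambda > 0 is an inverse-temperature parameter, and "remainder"
% collects the version-specific extra terms (typically log(1/epsilon), n, ...).
\mathbb{E}_{\theta \sim \hat{\rho}}\big[R(\theta)\big]
  \;\le\;
  \inf_{\rho}\left\{ \int R(\theta)\,\rho(\mathrm{d}\theta)
     \;+\; \frac{\mathcal{K}(\rho,\pi)}{\lambda} \right\}
  \;+\; \text{remainder}.
```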
There will be additional terms, but what you need to understand about the bound is these two terms, okay? And the other good news is that, depending on the bound, it usually holds for a large class of aggregation measures, but very often the one that we want to choose is this one, because some bounds are valid only for this particular aggregation distribution. And you can see that, if you are familiar with Bayesian statistics, it really looks like a posterior: you have the prior multiplied by something that gives more weight to parameters that make good predictions. So if you see this as a kind of pseudo-likelihood, then you can interpret it as a posterior. If you prefer, you can see it as a kind of smooth version of empirical risk minimization, because it gives more weight to parameters with a small empirical risk. Is that clear? So you call this rho thing the aggregation measure? Yes. You don't like the term aggregation measure? I'm just not familiar with the terminology, that's why. I would call an aggregation measure any measure on the parameter space, actually; I will call aggregation measure any possible rho, because I will replace this one by a more convenient one later. But depending on the paper, you have many different names for this particular one. In Bayesian statistics, people don't really like it because it is not built from a likelihood, but they still use it and call it a pseudo-posterior. My PhD advisor, Olivier Catoni, called it a Gibbs measure. And finally you have the name exponentially weighted aggregation, because of the exponential weights, which is used a lot as well. So it depends on the paper. So how do you relate this aggregation measure to the standard setup of prediction in machine learning, on the previous slide? It depends on the result you use. Sometimes, when the risk is convex (yes, I am talking to people doing optimization, so I had to use the buzzword convex at least once; I did, thanks), what you do is simply use Jensen's inequality here, and lower bound this integral by the risk of the aggregated estimator. So in that case, what you want to do in practice is compute a posterior expectation. On the other hand, when the risk is not convex, you can see that this is still a bound, but on a randomized procedure: each time you are given a new object x, you draw a parameter theta according to this probability distribution and then you predict y as f_theta(x). And then this is an upper bound on that randomized procedure. So it depends: if the risk is convex you can relate this to an aggregated parameter, otherwise you have to use a kind of randomized procedure, but in both cases you have something practical. Practical, I mean, if you can deal with this distribution, and this is what I am going to talk about.
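To fix notation for what follows, the aggregation distribution just discussed (pseudo-posterior, Gibbs measure, or exponentially weighted aggregate, depending on the paper) can be written, in my own notation, as

```latex
% Gibbs / pseudo-posterior / exponentially weighted aggregation distribution:
% pi is the prior, r_n the empirical risk, lambda > 0 the inverse temperature.
\hat{\rho}_{\lambda}(\mathrm{d}\theta)
  \;=\; \frac{\exp\!\big(-\lambda\, r_n(\theta)\big)\,\pi(\mathrm{d}\theta)}
             {\int \exp\!\big(-\lambda\, r_n(\vartheta)\big)\,\pi(\mathrm{d}\vartheta)},
```

used either through its mean, when the loss is convex and Jensen's inequality applies, or by drawing theta from it for the randomized prediction.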
Okay, so I will present two more accurate versions of this bound, depending on the context, and the first one is a bound for batch learning. So you have a given sample, i.i.d.; actually the independence part can be relaxed if you make some assumption on the dependence between the observations, but let us keep it for simplicity, yeah? Can you go back one slide? Yeah. There is no small r: is the bound not with small r, or is it with a big R? I mean, okay, I can provide you one later; I can provide a bound with the small r here, but it would be a kind of empirical bound, the one that you can compute on the data, so that you know that your probability of error is smaller than zero point something. Here I am more interested in this kind of inequality where, even if you cannot compute the bound, you want to be sure that, in some way, you will be very close to the minimizer of this quantity. But both are related. Your posterior is a minimizer of that one? Exactly, exactly. I will come back to this later when I present the proof, but it seems you already guessed how I am going to prove the result. So: an i.i.d. sample from a probability distribution P; the set of parameters, about which I have nothing to say, it can be anything for the moment; and a risk that can be written as the expectation of a loss function. For the sake of simplicity (it is not necessary, this kind of bound can be generalized to unbounded loss functions), I will present one of the weakest versions of these results, for a bounded loss function. Finally, the empirical risk is defined in the obvious way, and you still need a prior pi, but for the moment I do not give it an explicit form. In this case, you have the following result. It is a PAC bound: it is valid with large probability, and it says that the risk of this exponentially weighted aggregation procedure, whether it is randomized, or whether you have a convex loss and you use the posterior expectation, is upper bounded by the term I promised you, the balance between a small value of the theoretical risk and the complexity term, and then the remainder, which is a log of 1 over epsilon, since the bound holds with large probability, plus this lambda times B squared over n, where, I remind you, B is the upper bound on the loss function. This bound is due to Olivier Catoni, and it is based on previous work by John Shawe-Taylor and David McAllester. I just want to explain: obviously it looks like what I promised on the previous slide, but maybe it is not very explicit, so, although I will give many examples later in the talk, I just want to give one example in a very simple case, where the predictor set Theta is a finite set, to check what these two terms look like. So assume that you have a finite set of predictors, and then I am going to do something quite simple: I choose the prior to be the uniform distribution. Then you have this bound, the same bound as on the previous slide, but if you don't want to compute anything too complicated, you can replace the infimum over all probability measures by the infimum over all Dirac masses, okay?
And then the integral of the risk in this case is just the risk of the parameter theta_i, and you can compute the Kullback-Leibler divergence between a Dirac mass and a uniform probability measure on the finite set (I think it is feasible): it is the log of the cardinality of the set. Then you can see that this bound, remember, depends on the parameter lambda, the one in the posterior; maybe I should come back here, this one. In case you don't know how to choose it, now you have a way: you can optimize the bound, and you obtain that your aggregated estimator performs as well as the best predictor plus this term, square root of log M over n, okay? The optimal choice of the parameter lambda is given here. Without any other assumption on the loss function, you cannot improve on this; if you make reasonable assumptions, for example if you do least squares estimation, so you use the quadratic loss, you can obviously improve on this, and there are refined versions of this bound. There are many: in Olivier's book you probably have 156 different versions of this bound, some of them with not one parameter lambda but 77 parameters, lambda_1 to lambda_77; but in the end you have bounds, for example for the quadratic loss, without the square root here, which is once again the optimal rate. So this just gives you an idea of what you usually do: when you have a prior and a set of predictors, you try to upper bound this by taking the infimum not over all probability measures but over a suitable set of probability distributions that you can actually handle in the computations, and you end up with a prediction bound which, if you do the calculations carefully, is usually not too far from optimal, okay? Now I want to present the same kind of bounds in a different setting: online learning. As promised, I use the same notation, but this time I make no probabilistic assumption, so maybe the uppercase letters are not very well chosen here, because the x_t, y_t are not meant to be random variables. I deal with the online setting, and any possible sequence x_1, y_1, and so on, can be generated by any mechanism, even by an adversary that knows which aggregation procedure you are going to use. But anyway, you have the same setting: a set of parameters, and this time I will focus on the regret. It means that at each step I am going to use x_t and the previous observations to predict y_t, by, say, y_t hat, and then I want to compare my accumulated loss to the accumulated loss of the best possible predictor, okay? I still use the assumption that the loss is bounded; once again, this can be removed under different assumptions, but I want to present the simplest version of the bound so that you can compare it with the previous one. At each step you still have a kind of proxy for the quality of a predictor, which is the empirical risk up to time t minus 1. And the prior, but once again I will give a general version, so no assumption on the prior.
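As a purely illustrative sketch (my own toy code, not the authors' implementation), here is what exponential weights over a finite set of predictors look like, both in the batch form of the finite-set example above and in the per-step online form that is defined next; the tuning lambda = sqrt(T log M)/B below is only the kind of optimized value discussed above, up to constants:

```python
import numpy as np

def batch_exponential_weights(emp_risks, lam, prior=None):
    """Batch EWA over a finite set: weights proportional to
    prior(i) * exp(-lam * risk(theta_i)), computed in log-space for stability."""
    emp_risks = np.asarray(emp_risks, dtype=float)
    log_prior = np.log(prior) if prior is not None else np.zeros_like(emp_risks)
    log_w = log_prior - lam * emp_risks
    log_w -= log_w.max()                      # avoid overflow
    w = np.exp(log_w)
    return w / w.sum()

def online_exponential_weights(losses, lam):
    """Online EWA: at each step t, the weights use the cumulated losses of each
    predictor up to t-1 (uniform prior); returns the sequence of weight vectors.
    `losses` has shape (T, M): losses[t, i] = loss of predictor i at time t."""
    T, M = losses.shape
    cum = np.zeros(M)
    weight_history = []
    for t in range(T):
        weight_history.append(batch_exponential_weights(cum, lam))
        cum += losses[t]                      # reveal the losses of round t
    return np.array(weight_history)

# Toy usage: M = 5 predictors, bounded losses in [0, 1], horizon T = 200.
rng = np.random.default_rng(0)
T, M, B = 200, 5, 1.0
losses = rng.uniform(0.0, B, size=(T, M))
losses[:, 2] -= 0.3                           # make predictor 2 the best one
losses = np.clip(losses, 0.0, B)
lam = np.sqrt(T * np.log(M)) / B              # illustrative tuning only
weights = online_exponential_weights(losses, lam)
ewa_loss = float(np.sum(weights * losses))    # cumulated loss of the aggregate
best_loss = float(losses.sum(axis=0).min())
print(f"EWA cumulated loss: {ewa_loss:.1f}, best single predictor: {best_loss:.1f}")
```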
So here what I propose is to do basically the same thing, but at each step t: at each step t I define this exponentially weighted aggregate, the prior multiplied by the exponential of minus lambda times the empirical risk at time t, and I use it as a predictor. In this case, you can see that I use the convexity of the loss function, so I don't use the randomized procedure (although I could have): I use the aggregated predictor under this pseudo-posterior distribution. And then you have this result. Actually, I am not really sure who was the first person to write it. You have a version in Cesa-Bianchi and Lugosi's book with a discrete parameter space, but I found a very clear explanation of this result, and many, many variants, in Sébastien Gerchinovitz's PhD thesis; I don't know whether he was the first to write it in this form. So this time the cumulative loss is smaller than, again, this balance between the best possible aggregation and the complexity term, the distance with respect to the prior. So you see that things are almost the same: if you specify a prior pi and a parameter set Theta, you can make these two terms explicit and then choose an optimal parameter lambda; basically it is the same as before, for example with a finite parameter set, so I don't want to redo all the calculations. Okay, so the remainder of this talk will be about the question: are you sure that you can compute this? I think that is quite important. I discovered that this kind of thing can be important quite recently, but still, I think it is quite important. Before that, I have a few remarks about these techniques, in order not to upset anyone in the audience. First, there are many, many other versions of these aggregation bounds. A very famous one is a version by Dalalyan and Tsybakov; the difference is that it is not exactly about prediction, it is about estimation, regression estimation with fixed design, so you cannot really say something about the prediction risk, but on the other hand it is very convenient because they have no boundedness assumption, so the tool is very convenient in many settings. In fact, they used in their paper a lemma by Leung and Barron, and there are many, many related things in aggregation theory and statistics that are not exactly the same kind of bound but are still very related to this approach. Something else, for those among you who are statisticians and know about Bayesian statistics: there is, obviously, the link I mentioned with Bayesian statistics, if you see this as a pseudo-likelihood; and I call this the prior precisely because of that link. There have been a few papers recently where people motivate the use of this kind of probability distribution even when you are not doing statistical learning, just when you are doing Bayesian statistics: using something like prior times pseudo-likelihood. You can think of an example, and this is just a short parenthesis for statisticians: if you have Gaussian observations and you try to estimate the mean, then here you would obviously use a Gaussian posterior, so it would be the same, but the risk here would be the quadratic loss.
But in practice, when you have outliers, you know that the quadratic loss leads to non-robust estimation, so for example here they propose to replace it by a robustified loss function, and in that case you don't use the likelihood, but you still have something that looks like this decomposition, prior times pseudo-likelihood. So you might have other reasons, even though I believe the one I presented before is the best, to use this kind of pseudo-posterior. Why do you say pseudo-likelihood? If r is a negative log-likelihood? Then it is exactly the likelihood. The reason why I call it a pseudo-likelihood is that usually, when you say likelihood, you mean that you describe a parametric distribution on the observations, which is not the case here. So Bayesians are not allowed to choose lambda? Bayesians are not allowed to choose lambda, and actually in all my theoretical results I choose lambda in a way that would not be acceptable to Bayesians. So I agree it is not exactly the same thing, even if it is close. And just one last thing that I really like to mention: there is a community in Bayesian statistics of people who like to analyze the rates of convergence, sorry, the concentration rates, of posteriors, and it is quite funny because they use completely different tools, but in the end they usually end up with the same computations. What I mean is that when you compute this bound here, you usually have everything that is needed, from a technical point of view, even though the proof is different, to use those tools to compute posterior concentration rates. Sorry for the parenthesis, but now I am coming back to the main purpose of this talk. So the problem is: we want to compute this. And what does it mean to compute a probability distribution? Well, I want to sample from it, or to compute its mean. Obviously, if you know about computational Bayesian statistics, you know that there are methods to do this: Monte Carlo methods. For example, in the paper by Arnak and Sacha, Dalalyan and Tsybakov, this is what they do: they propose the Langevin Monte Carlo algorithm to compute their estimator. I did some work with Gérard Biau where we used the reversible jump algorithm to compute our estimator. So there were attempts to use Monte Carlo methods for this, but this is where the dark side and the bright side of the force come in: when I present MCMC methods in front of optimization theory people, they usually tell me "ooh, dark side of the force, you shouldn't use this, it's too slow". And it is not only that it is too slow, because in some cases it works well, but that we don't have guarantees on how far we are from this quantity. So I want to say that this is not completely true, and I want to mention two papers presenting different approaches. This one presents concentration inequalities for Markov chains when you start from a non-stationary probability distribution, so in some way you have a tool to prove concentration of the empirical approximation that you get from MCMC. On the other hand, it depends on many, many assumptions on the Markov chain, and to me it is not clear how these assumptions are related, for example, to the dimension of the problem. So this approach might be fine if you have a one-dimensional parameter space, but I don't know how it scales with the dimension of the parameter set.
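Since Langevin Monte Carlo was just mentioned as one way to sample from such a pseudo-posterior, here is a minimal, purely illustrative sketch; it is my own toy code, it assumes a differentiable empirical risk (for instance a quadratic or logistic surrogate rather than the 0-1 loss) and a Gaussian prior, and the step size and iteration count are arbitrary:

```python
import numpy as np

def langevin_pseudo_posterior(grad_emp_risk, dim, lam, prior_var=1.0,
                              step=1e-3, n_iter=5000, rng=None):
    """Unadjusted Langevin algorithm targeting (approximately) the density
    proportional to exp(-lam * r_n(theta)) * exp(-||theta||^2 / (2 * prior_var)).
    `grad_emp_risk(theta)` must return the gradient of the empirical risk r_n."""
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(dim)
    samples = []
    for _ in range(n_iter):
        # gradient of the negative log pseudo-posterior
        grad_U = lam * grad_emp_risk(theta) + theta / prior_var
        theta = theta - step * grad_U + np.sqrt(2 * step) * rng.standard_normal(dim)
        samples.append(theta.copy())
    return np.array(samples)

# Toy usage: pseudo-posterior built from a quadratic empirical risk
# r_n(theta) = mean((y_i - <x_i, theta>)^2) on synthetic data.
rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.standard_normal((n, d))
theta_star = np.array([1.0, -2.0, 0.5])
y = X @ theta_star + 0.1 * rng.standard_normal(n)
grad_r = lambda th: (2.0 / n) * X.T @ (X @ th - y)
draws = langevin_pseudo_posterior(grad_r, dim=d, lam=float(n), rng=rng)
print("posterior-mean estimate:", draws[1000:].mean(axis=0))  # discard burn-in
```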
Arnak Dalalyan has a very nice paper where he has the exact scaling with respect to the dimension, for a version of the Langevin Monte Carlo algorithm, so it is very nice. I mention it as a preprint, but maybe it has been accepted since I wrote this slide, I don't know; anyway, it is a very nice paper. On the other hand, you have many assumptions on the pseudo-posterior in that paper, so here I want to discuss another possible approach. The idea is not to use Monte Carlo methods at all, and to use optimization theory instead; you convinced me it is the best thing to do. So the idea is (just for one slide I will use this notation, which is the usual notation for the posterior in Bayesian statistics) that if this object is not reachable in practice, let us not try to reach it: let us try to reach a simpler object. What we are going to do is not pay attention to all possible aggregation distributions, but only to a fixed family of aggregation distributions, for example a parametric family such as all Gaussian distributions. And then we try to minimize, over this family, the distance between what would be the true posterior or pseudo-posterior, our objective, and the approximation. This is not our idea, obviously: it is very famous in Bayesian statistics, people have used it for a long time, so I don't know what the seminal paper for this idea, called variational Bayes approximation, was; I learned it from papers by Michael Jordan on applications to graphical models, but I am not sure he was among the first to use the method. I think mean field, which is one such version, was used long before. Okay, I don't know who coined the name. But when I ask who was the first to use this, I mean who was the first in statistics, because when we rediscover something in our own field we usually don't pay attention to what was done before in other fields. So yes, you are right, it was probably used in physics much earlier. I like the Jordan reference. Well, this is why I used it; I learned it from that paper myself, but that paper probably gives more references to what was done before. So in the end, you use either a non-parametric or a parametric family; by this I mean an infinite- or finite-dimensional set F. The mean-field approximation would usually correspond to an infinite-dimensional set F, but here we focus on a parametric approximation: you just give a set of probability distributions rho_a indexed by a parameter a in a finite-dimensional space, and in that case you completely replace your posterior sampling problem, or your posterior mean problem, by an optimization problem. So I am going to present applications now, and I am going to try to justify this approach and to prove, using the previous theory, that in some cases you don't lose a lot in prediction ability; but do not expect me to provide, for example, an optimal optimization algorithm here. That is not my job; I am trying to learn it thanks to your talks yesterday and today, but I am not the one who can give the best possible algorithm here. I just want to show you that, even if you use aggregation theory, the problem can boil down to an optimization problem which you can solve. So our first question was: do we have any theoretical guarantees on the approximation?
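In symbols (again, my own notation), the variational approximation just described solves

```latex
% Variational approximation of the pseudo-posterior within a parametric
% family {rho_a : a in A}; K(.,.) is the Kullback-Leibler divergence.
\hat{a} \;=\; \operatorname*{arg\,min}_{a \in A}\;
   \mathcal{K}\big(\rho_a,\, \hat{\rho}_{\lambda}\big),
\qquad
\tilde{\rho} \;=\; \rho_{\hat{a}},
```

and the question is what the PAC-Bayesian theory says about the prediction risk of this rho tilde.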
So I will present a result, but first a few explanations. This is what we target, this is what I want to compute: I want to minimize the distance, in terms of Kullback-Leibler divergence, between the approximation and this pseudo-posterior. And the thing is, and this is exactly what you said before, Francis, that when you write it down, it boils down to the aggregated version of the empirical risk, this time, plus the Kullback-Leibler divergence to the prior, plus a remainder term which does not depend on a, so I can forget about it when doing the minimization. So what I am going to do is first minimize this with respect to a, define a hat as my estimator, and finally my aggregation distribution, if it is necessary to compute it, is rho with the parameter a hat, sorry. I am a bit confused: you want to approximate this one, instead of just the bound you actually used to derive it? Actually, it is the same. Okay, so it doesn't change. You will see: the bound that we had on the aggregation procedure (sorry, it is too far back, on the first slides) has a proof based on the fact that we minimize this quantity with respect to all probability distributions, but if you minimize only over a parametric set, it still gives you a theoretical guarantee on what you get in the end. So you are saying that the KL between distributions was the right metric of distance to minimize the bound, is that what you are saying? The main point is that you had a bound which was giving you the actual performance you care about, which has nothing to do with the KL in general, it is just the risk, and so you just want to find an approximate aggregation measure which has a good risk. So why minimize the KL? Okay, I have two answers for this. First, some people did it before. And second, if you try to replace the KL by another metric on probability distributions, things might be harder from a computational point of view; the good point with the KL is that in the end you can compute it explicitly, so you know what you want to minimize. If you replace it, for example, by the total variation distance, I am not sure you would be able to say anything about how to minimize it. And the reason why we were interested in this is that the minimizer of this quantity plugs very easily into the analysis I presented before: these PAC-Bayesian bounds that were already known, which relate the prediction risk to this balance between aggregated risk and Kullback-Leibler distance to the prior. You can plug this in and get a theoretical guarantee on your approximation. You see what I mean, or not? So your bound is essentially that one, with the small r replaced by capital R. Exactly; you just guessed the next slide, so I am going to show the next slide now. I had read your paper, so I am not guessing. No, no, my explanation was so good that you guessed, I know.
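For the record, the identity invoked above is a standard computation; written in my own notation it reads

```latex
% For any rho_a absolutely continuous with respect to the prior pi:
\mathcal{K}\big(\rho_a,\, \hat{\rho}_{\lambda}\big)
  \;=\; \lambda \int r_n(\theta)\,\rho_a(\mathrm{d}\theta)
  \;+\; \mathcal{K}\big(\rho_a,\, \pi\big)
  \;+\; \log \int \exp\!\big(-\lambda\, r_n(\vartheta)\big)\,\pi(\mathrm{d}\vartheta),
```

where the last term does not depend on a, so minimizing the left-hand side over a amounts to minimizing the aggregated empirical risk plus the Kullback-Leibler term.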
Okay, so in some sense this result: I wanted to cite the paper, obviously, but I should not be too proud of it, because it is just a weakening of a result that existed before; what I am, in some way, more proud of is what will follow, the fact that it leads to a practical procedure for aggregation. So this paper (I forgot to mention my co-authors at the beginning of the talk, so now is the time) is the paper that we wrote with Nicolas Chopin and James Ridgway. James was a PhD student of Nicolas at ENSAE, and now he is doing a postdoc in Bristol. And it tells you that if you use this variational approximation, rho tilde (it is not the rho hat that I had before, sorry about the mess), rho tilde being the one where you plug in the minimizer of this criterion, then the approximation that you have, instead of achieving the best balance between risk and KL over all probability measures, achieves it only within the family A, only within the parametric approximation. Obviously you can also do it for non-parametric approximations like mean field; we actually did it in the paper, but as that is a three-hour story, I just wanted to present the short version. So the thing is, the work is not done there. You have a bound, it is in an abstract form, and the point is that if you take a very poor family A here, it is possible that this bound is very large. So it does not give you a way to prove that VB, the variational Bayes approximation, always works; it provides you a way to check when it works. You have this bound, you compute it in your model: if you get something small, it means that by using VB you will lose nothing in terms of accuracy when you use it for prediction; on the other hand, if you compute this bound and you find something large, it just tells you nothing, okay? So it is only a tool to try to make sure that the VB approximation makes sense. So for variational inference, if it works you are happy, and if it does not work you don't know; is that how we should see it? Yeah, yeah, but in some way it does give you theoretical guarantees that at least in some settings it should work, okay? So, I prepared the proof of these results, but maybe I should first present the applications and then go back to the proof if I have time later. So I want to apply this to a linear classification problem, in the batch setting, so I come back to this setting: you have a sample, i.i.d. from P, and this time the classifiers are linear classifiers, so you just compute the scalar product between the object and a parameter, and then you check on which side of the hyperplane your object is.
The risk is then, in this case, the classification risk, the probability of making a prediction error, and I want to approximate it by the empirical risk. Since, thanks to you, I have learned a lot about optimization theory, I could try to minimize this: I just have to compute the gradient of this quantity and set it to zero, which I am told always works. But in this case I was not able to compute a sensible gradient, so I decided to use our approach with aggregation. I use a Gaussian prior, just because it keeps the calculations simple; I will explain what changes if you replace it by another prior if I have time, but for the moment just stick to a Gaussian prior. Then, what is the pseudo-posterior going to be? It is the prior multiplied by e to the minus this, so it is not very nice, and the idea was simply to approximate it by a Gaussian distribution, which should be much easier to handle: you then just have to optimize with respect to mu and with respect to Sigma, and you hope that in the end you get something sensible. First, what is the optimization criterion in this case? We wanted to write it down explicitly, and what you obtain is this, where Phi is the CDF of the standard Gaussian distribution. You obtain something that looks like a slightly modified version of the empirical risk, a smoothed version of the empirical risk, due to the integration with respect to the aggregation distribution; then you have the squared norm of mu, so it is like a ridge penalty; and then you also have this penalty depending on the covariance matrix. The problem is that, while in some way it looks better than the original criterion, in the sense that this is a smooth minimization program whereas the original one was not smooth, it is still not very good, in the sense that it is not convex. And I heard in the talks yesterday that you have to use the word convex a lot when you talk about optimization, so this is another thing about optimization that I know: it is not so good. Even so, we tried to optimize it; in small dimension you can still do it, using for example gradient descent at different scales, with deterministic annealing.
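As a rough illustration of the kind of objective just described, here is my own sketch, with my own scaling conventions: a diagonal Gaussian family N(mu, diag(s^2)), a Gaussian prior N(0, v I), and the Gaussian expectation of the 0-1 loss written with the standard normal CDF. None of this is the authors' exact code.

```python
import numpy as np
from scipy.stats import norm

def vb_objective_01(mu, log_s, X, y, lam, prior_var=1.0):
    """Variational objective for linear classification with the 0-1 loss:
    lam * E_{theta ~ N(mu, diag(s^2))}[ empirical 0-1 risk ]
      + KL( N(mu, diag(s^2)) || N(0, prior_var * I) ).
    y takes values in {-1, +1}; log_s parametrizes the standard deviations."""
    s2 = np.exp(2.0 * log_s)
    margins = y * (X @ mu)                                 # mean of y_i <x_i, theta>
    scales = np.sqrt((X ** 2) @ s2)                        # std  of y_i <x_i, theta>
    smoothed_risk = norm.cdf(-margins / scales).mean()     # E[ 1{ y <x, theta> <= 0 } ]
    kl = 0.5 * np.sum(s2 / prior_var + mu ** 2 / prior_var
                      - 1.0 - np.log(s2 / prior_var))
    return lam * smoothed_risk + kl

# Toy usage on synthetic, roughly linearly separable data.
rng = np.random.default_rng(2)
n, d = 300, 4
X = rng.standard_normal((n, d))
w_true = np.array([2.0, -1.0, 0.0, 1.0])
y = np.sign(X @ w_true + 0.3 * rng.standard_normal(n))
val = vb_objective_01(mu=np.zeros(d), log_s=np.zeros(d), X=X, y=y, lam=float(n))
print("objective at (mu=0, s=1):", round(val, 3))
```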
And, yeah, first I should maybe show you the results before the theoretical analysis. So what we did here: we took seven datasets from the machine learning repository; we used sequential Monte Carlo, a Monte Carlo method that works well at least when the dimension is not too large, so you know that it gives you a kind of benchmark result; we compared our variational approach to it; and we also used a non-linear SVM, to compare with the linear methods. I did not know exactly why we did that at first, but the reason was that we wanted another method in order to check, for each dataset, whether it is sensible to analyze it with a linear method. For example, on this dataset you see that the two linear methods do not perform so well, and it is clear that you should use a non-linear method; on the other hand, on this dataset it seems that using a non-parametric method like the SVM does not bring you a lot compared to linear methods, so in this case linear classification is sensible. And you see that in many examples (not in all of them, not here for example, but in many) we actually have a slightly better performance, which is due to the fact that... yes? What is SMC? Sequential Monte Carlo. On which model? Oh, on the same model, sorry: it is the approximation of the usual pseudo-posterior using this. We just did what we would have done before we learned about optimization. It is not MCMC, actually; SMC is a variant, it is the algorithm where you generate points from the prior, then you eliminate those with a poor posterior value, duplicate the ones with a large posterior value, and apply a move to each of them. Exactly, exactly. So the two columns are the same estimator computed in two different ways, and the one I am trying to sell today is this one, while this is the old-fashioned one. You are not saying that it is superior, though? You just said that, like, one minute ago. Okay, I said that I am trying to sell VB today, so obviously I chose the seven datasets that prove I am right, and I added this one because I know that in a simulation study you don't have to be right 100% of the time. Why do you say your aggregation methods are linear? If you are aggregating different linear classifiers, that is nonlinear anyway, no? Yes, you are right, in the end it is nonlinear, but it uses a linear set of predictors. But in the end it is a nonlinear predictor. You are right. Still, using the bound that I will present later, I might do much better than the best linear classifier, but my theoretical analysis only says that I do as well as the best linear classifier; I will come back to this later. In some way, my objective when defining a set of linear predictors was to do as well as the best of them. It is true that when you aggregate you can do much better than this, but that is not what we try to capture in this analysis, okay? In this case, obviously, the improvement might be due to a clear nonlinearity, but for example in this case, where there clearly seems to be a very good linear classifier, you still have an improvement, which, from what we observed, is due to the fact that we stopped SMC after some time, and you are not sure that it converged, while we stopped the gradient algorithm when the gradient was exactly equal to zero, so we were sure that it converged. Exactly equal? Yeah. Up to 0.5 decimals. Yes. But didn't you say it was not convex? Yeah, no, you are right, it is not convex, and I will discuss it, I promise; I pointed this out, we don't have guarantees on the minimizers, but I will come back to this later, I actually have another bound later. But first, before this, we have these results, which seem at least promising, even though we have to say more. Yes? So this is the final performance; could you also plot the bound that you obtained? Because, to me, PAC-Bayes is also used to get confidence bounds on the predictions. Did you plot them? No, sorry, I don't have those plots here, but you are right, and once again I will come back to this later when I discuss the proof. Just one thing now: if we use the theorem that I presented before... so, sorry, where is it?
Here, this one. We replace the set A by what I said, the set of Gaussian aggregation distributions over linear predictors, and then we obtain this result. So what does it tell you? Well, there is an assumption, I will come back to it in one minute, but first it tells you that the risk of your aggregated method (and you see that here I use the randomized version, so you are right, if you are lucky it can be better than linear), anyway, what I know for sure is that I do at least as well as the best possible linear predictor, plus this rate, square root of d over n, which is, once again, the best you can do for classification. Well, there is an additional log term, which can be removed at the cost of a very, very exhausting analysis; in some cases it can be removed, actually, using PAC-Bayesian bounds, but in the simple version you have this log n term. And, yes, I just want to mention this: you have a kind of regularity assumption, in some sense, which tells you that when you slightly change theta, the true risk does not jump a lot. This is obviously sensible, given that you replace a point estimator by an aggregated estimator that takes a neighborhood into account: if you use a Gaussian aggregation, it cannot work without this kind of regularity. So you have this assumption, and without it we are not sure what happens. I want to mention that this assumption is not necessary for the PAC-Bayesian analysis of the exponentially weighted aggregate itself, when you are not doing VB; here we need this assumption, and under it we know that the method will work. And how do I prove this result? This is the opportunity to show you another application of the trade-off between the risk and the KL term. What we do is just apply the theorem: the risk of the aggregated predictor is as good as the infimum over all possible Gaussian distributions, and for the sake of simplicity I use only the ones with a diagonal covariance matrix (you can change that, it will improve the constants, but this makes things much, much simpler). And the idea, without deriving all the calculations: first, you know the Kullback-Leibler divergence between two Gaussians, so in this case it is this term, and you see that here you have the m, which is the dimension (sorry, here it is m and on the previous slide it was d, but it is the same). And then the other term: this is where this kind of smoothness comes into account, the aggregated risk is almost as good as the risk of the mean of the distribution, plus some remainder. You then optimize with respect to everything, with respect to this s squared here and then with respect to lambda, and in the end you get the optimal bound. So this is how it works. Okay, I still have 15 minutes. About the bound: in most of the papers that I know, when people wanted to compute the bound for the full Gibbs posterior, they had this infimum here, and the first step of the proof was to replace the infimum over all probability distributions by a reasonable parametric set anyway. So the bound is exactly the same, and this is where we had this idea: it was frustrating, in some way, to use the VB approximation in the bound but not for the estimator, and obviously you can do it. So this is what people did in the beginning:
they were not choosing Gaussians, but rather something like a uniform distribution on a ball around the best parameter, which is actually a good idea because it allows you to use a PAC-Bayesian bound to prove bounds on the empirical risk minimizer as well. But usually, as long as it is possible to take whatever mean you want and to change the scale of the distribution, you can basically get what you want. And, sorry, so, as I told you before, it works well, but the problem is that it is not convex, and as it is not convex, you are not convinced (that is one of the jokes I prepared yesterday, sorry). So, obviously, you notice: we are doing classification, so you want to replace the zero-one loss by a convex surrogate. For example, you can use this paper by Tong Zhang, in which you have many, many theorems about all the possible replacements for the zero-one loss and what you lose in terms of rate of convergence. Here we wanted to use the hinge loss, the one that is used for support vector machines. So we define our risk in this way, we have this empirical risk, and we still use a Gaussian approximation, for which, this time, and this is important for the analysis, we fix the Gaussian approximation to have a single variance parameter: all the coordinates have the same variance, and the covariance matrix is diagonal. In this case we did the calculations; they are not very difficult, but what is maybe not obvious at first sight is that this is a convex criterion. You still have the CDF of the Gaussian distribution here, and here you have the density function of that distribution; in the end you have two terms that still look like, actually it is just a new convex surrogate of the zero-one loss function (Sylvain pointed this out to me last time), but it is a new one for which you have another guarantee. So this is the empirical risk part; you still have this kind of ridge term here, the penalty that is the square of the parameter mu, which is due to the Gaussian prior; and then the penalty on the variance. And it is not obvious: for example, we tried, for fun, to replace the zero-one loss here by other convex surrogate loss functions, and then, when you integrate with respect to the posterior, you do not necessarily get a convex criterion. Usually it is convex with respect to mu, but what can be painful is the parameter sigma. In this case we are lucky: it is convex in mu and in sigma. Can you check that just with the second derivative? We did the computation and it works. Okay, you read the paper. I know it is not obvious, and once again it is not automatic: it is not because you take a convex loss and then integrate that you obtain a convex function, but in this case it works. And then, in this case (and this is where, as I said, I take many precautions, not being a specialist of optimization theory), there is not a lot that you can do: it is convex, but not really more than convex, you don't have many other good properties. For example, if you make some assumptions on sigma, like preventing sigma from going too close to zero, then you can do better things.
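A sketch of the kind of convex criterion just described, again in my own notation and scaling: an isotropic Gaussian family N(mu, sigma^2 I), a Gaussian prior, and the closed form E[(a - Z)_+] = (a - m) Phi((a - m)/s) + s phi((a - m)/s) for Z ~ N(m, s^2), which gives the expected hinge loss; the crude numerical gradient step is only there to show the criterion can be handed to any first-order method.

```python
import numpy as np
from scipy.stats import norm

def vb_objective_hinge(mu, sigma, X, y, lam, prior_var=1.0):
    """Convex variational objective for the hinge loss:
    lam * E_{theta ~ N(mu, sigma^2 I)}[ mean_i (1 - y_i <x_i, theta>)_+ ]
      + KL( N(mu, sigma^2 I) || N(0, prior_var * I) )."""
    d = mu.shape[0]
    m = y * (X @ mu)                         # mean of y_i <x_i, theta>
    s = sigma * np.linalg.norm(X, axis=1)    # std  of y_i <x_i, theta>
    u = (1.0 - m) / s
    expected_hinge = ((1.0 - m) * norm.cdf(u) + s * norm.pdf(u)).mean()
    kl = 0.5 * (d * sigma**2 / prior_var + mu @ mu / prior_var
                - d - d * np.log(sigma**2 / prior_var))
    return lam * expected_hinge + kl

def numerical_grad_step(mu, sigma, X, y, lam, lr=1e-4, eps=1e-5):
    """One plain numerical-gradient step on (mu, sigma), keeping sigma > 0."""
    g_mu = np.zeros_like(mu)
    for j in range(mu.size):
        e = np.zeros_like(mu); e[j] = eps
        g_mu[j] = (vb_objective_hinge(mu + e, sigma, X, y, lam)
                   - vb_objective_hinge(mu - e, sigma, X, y, lam)) / (2 * eps)
    g_s = (vb_objective_hinge(mu, sigma + eps, X, y, lam)
           - vb_objective_hinge(mu, sigma - eps, X, y, lam)) / (2 * eps)
    return mu - lr * g_mu, max(sigma - lr * g_s, 1e-3)

# Toy usage
rng = np.random.default_rng(3)
n, d = 300, 4
X = rng.standard_normal((n, d))
y = np.sign(X @ np.array([1.5, -1.0, 0.5, 0.0]) + 0.3 * rng.standard_normal(n))
mu, sigma = np.zeros(d), 1.0
for _ in range(200):
    mu, sigma = numerical_grad_step(mu, sigma, X, y, lam=float(n))
print("objective after 200 steps:",
      round(vb_objective_hinge(mu, sigma, X, y, float(n)), 2))
```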
But in this case we just took a ball for the parameter set of mu and sigma, and then used the gradient algorithm. What we were proud of is that, even though we don't know a lot about optimization theory, we were able to write this theorem, which tells you, basically under the same assumption as the previous one, that at each step of your gradient algorithm you compute an aggregation distribution (you have a mu at step k and a sigma at step k), and the risk of this procedure, so the risk of what actually comes out of the computer and not only of what comes out on paper, is as good as the best possible risk of a linear classifier, plus the minimax rate for classification problems, square root of d over n, plus something that depends (probably badly, for people who are good at optimization theory, but still explicitly) on the number of steps that you have done. So obviously, and this is why I went into the details more than into getting the best bound in the end, you can play with this: you can change the hypotheses, you can change the predictor class, you can change the prior, and then you can change the algorithm, and I am sure you will be able to improve on this bound. But still, you can now see that, using PAC-Bayesian bounds, it is possible to provide guarantees on the predictions of variational approximations, and it is possible to reduce the problem to a convex optimization problem in some cases. I just wanted to advertise for something; I feel free to advertise for it because it is not mine: just after we submitted this paper, James Ridgway, one of my co-authors, decided to write a package, which, after a very, very long peer-review process, is now available on the R website. It works very simply, so I don't want to go into the details: you just enter the matrix of labels and the matrix of objects. What I wanted to show is that, even though I did not make the plot, you can get the bound: for example, in this case you know that with probability at least 99% the probability of error is smaller than 0.79, which is not so good. This is a bound on the... yes, this is an empirical version of the bound, on the hinge loss actually. Even though, yeah, you are right, it is not very good. We know that these bounds are pessimistic, especially when you use them on a problem where you clearly have a good linear classifier; then the square root of d over n rate is not optimal, you can replace it by d over n when you have a margin condition or something, so this bound is usually not so tight. But we are happy with the fact that minimizing this bound provides a good estimator. You can play with it; in some cases it is possible, in the end, to get something smaller than one half, you have to wait for a long time, but it is possible. Do I have time for the proofs? Maybe five minutes, yeah. Okay, so I come back to the main result. The proof is quite simple, but it will be the opportunity to mention this empirical bound, which unfortunately I did not plot. So I want to prove this theorem, the fact that your VB approximation performs as well as the best possible approximation in the parametric family A.
And we start with Hoeffding's inequality: you have a bound on the exponential moment of capital R minus small r. Well, I just rewrote it, introducing my probability epsilon, and then you integrate it; this is the change with respect to the classical VC-type bounds, you integrate with respect to the prior, which does not change anything for epsilon because it is a constant. This is all very standard, but then you get to this point, and you use this lemma, the fact that you can compute the convex conjugate of the KL divergence; moreover, and I will use this later, you know the distribution rho that reaches the supremum here. So when you use this lemma you get a kind of uniform version of Hoeffding's inequality, but here again, since I integrated with respect to the prior, the difference with what you have in a VC-type analysis is that in the VC analysis you have a sup with respect to the parameters theta, while here you have a sup with respect to all probability measures on the parameter set Theta. And then finally you use Markov's inequality, and it gives you this empirical bound. I wanted to insist on this because Francis mentioned it many times: here you know that, not only for the minimizer but for all possible probability distributions rho, all possible aggregation distributions, the risk of the aggregated procedure is smaller than something that is completely empirical. It depends only on things that you know: the empirical risk, so it depends on the sample, the parameter lambda that you choose, and the Kullback-Leibler divergence with respect to the prior, which you choose as well. So you can compute this bound numerically, and this is the bound that is also provided by the package. Actually, I mentioned at the beginning of the talk the origins of PAC-Bayesian bounds, like McAllester's work: he focused mainly on this kind of bound, because he thinks that what is important is to be sure that your classifier, with probability 99%, makes a mistake with probability smaller than, say, 0.1. On the other hand, if you want to prove a theoretical bound, something that depends on the true prediction risk, you have to derive this as a tool anyway, so it is quite important. Yes? Are you allowed to optimize over lambda? No, you are not allowed to optimize over lambda in this bound. Or rather, you are allowed to optimize over lambda in some way, but only for a lambda that does not depend on the sample. What you can do is use a union bound over a grid of values of lambda, and then, using the fact that one term is increasing and the other is decreasing with respect to lambda, you can even optimize over an interval, and this is what people do: they usually provide a more sophisticated version of this bound with an infimum over lambda. Even so, in practice, this is not the thing that works best: if you want to choose lambda, this is the main problem that we still have to solve, because even if you minimize this bound with respect to lambda, you usually do not get the best possible lambda; cross-validation, for example, works much better, but it is obviously much more expensive. Okay, so you have these empirical bounds.
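Written out (in my own notation, with the constant in the Hoeffding step left schematic), the two ingredients just described are the convex-duality formula for the Kullback-Leibler divergence and the resulting empirical bound, which holds with probability at least 1 - epsilon simultaneously for every aggregation distribution rho:

```latex
% Donsker-Varadhan / convex duality for the Kullback-Leibler divergence:
\log \int e^{h(\theta)}\, \pi(\mathrm{d}\theta)
  \;=\; \sup_{\rho}\left\{ \int h \,\mathrm{d}\rho \;-\; \mathcal{K}(\rho,\pi) \right\}.

% Resulting empirical PAC-Bayesian bound (constants schematic), simultaneously
% over all aggregation distributions rho, for a loss bounded by B:
\int R \,\mathrm{d}\rho
  \;\le\; \int r_n \,\mathrm{d}\rho
  \;+\; \frac{\mathcal{K}(\rho,\pi) + \log\frac{1}{\epsilon}}{\lambda}
  \;+\; \frac{\lambda B^2}{c\, n},
```

with c an absolute constant coming from Hoeffding's inequality applied to the bounded loss.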
And the thing is that the reason why people use this exponentially weighted aggregate that I introduced at the beginning is that it is exactly the one that minimizes the right-hand side, okay? And as it minimizes the right-hand side, you have here an infimum, and then what you do: first, you still have this empirical bound that I mentioned before, but since the aggregate minimizes the right-hand side, what you can do is use the reverse bound, where you redo the whole argument but replace R minus small r by its opposite. And then it tells you something which seems useless in practice: the integral of the empirical risk is smaller than the integral of the true risk plus something. But you can plug it into the previous analysis: what we had before is that the integral for the minimizer is smaller than the empirical bound, but the empirical bound, taken at the minimizer, is an infimum, and then, using this reverse result, you can replace the empirical risk by the true risk inside the infimum. And so here you have the theorem. So this is quite a standard analysis, actually, but the only point was to remark that if you minimize not with respect to all probability distributions but just over your approximation family, it still works, okay? Well, it is time to conclude, so I have just one slide called conclusion. There are other things to do, and some of them are already done in the paper: we also provide, for example, a complete analysis of ranking models; it is very similar to classification, because we still use linear score functions, but there again we are able to prove that we can replace MCMC methods by optimization methods, and even, in some cases, by convex optimization problems. We also have a sketch of the analysis for matrix factorization; a sketch in the sense that there are many open problems there. If I have one more minute, I will come back to this. Sorry, this result here tells you that the variational Bayes approximation performs as well, in some sense, as the exponentially weighted aggregate that you wanted in the beginning, provided this term is not too large. So what we did was to prove, in the matrix factorization case, that this bound, whether you compute it for the exponentially weighted aggregate or for the variational approximation that is used in practice by Bayesian statisticians when they do matrix factorization, is the same. So it means that, in some way, if the PAC-Bayesian bound held for the exponentially weighted aggregate, then it would hold for the variational approximation as well. On the other hand, to my knowledge, nobody has so far been able to prove that it holds for the exponentially weighted aggregate, so there is a missing part in this problem. And there is the other missing part: the criteria used for matrix factorization are non-convex, so we don't know whether we actually converge to a proper minimum, or even to a minimum at all. But even so, it seems that there is something to be done there, even though we are not able to complete the analysis. So the theory is not complete, as I wrote. We have other work in progress: in the package, for the moment, we only have one version of the gradient descent, which is not necessarily the most efficient one.
So James is currently writing other functions to use other optimization methods. And what I am interested in currently, and this will be my last slide, is a question for you: can you help me with this? You remember that I presented at the beginning two PAC-Bayesian bounds, one for the batch setting and the other for the online setting. The one for the batch setting holds for the exponentially weighted aggregate, but it also holds for the variational approximation. I said nothing about the online bound, and the reason is that we were not able to perform the same analysis. I can show you what we have. Remember that in this case, at each step t, you want to compute the mean according to this probability distribution, and what we could do is use a VB approximation, so you perform an optimization. It would already be more costly than a proper online gradient algorithm, because at each step you would have to solve an optimization problem, but even so, we are not really sure that this works. If you use this, the mean prediction according to the approximated posterior rather than the true one, then we have this bound: the cumulative loss is smaller than the same criterion as the one you would have for EWA on the class, but you have an approximation term, and instead of having it only once, you have it at each step of the algorithm. So the cost is much, much higher. Maybe this is just what comes out of the standard proof for online EWA and there is a better way to analyze it, but until now we have not been able to improve on the fact that we do not pay the price for the approximation once, we pay it at each step of the algorithm. And so, if there is a non-zero distance, in some way, between the true EWA and your approximation family, you might pay a huge price here. So we are not able to generalize this analysis to online prediction, but we would like to; so if you have some ideas and if you want to write a paper with me, you are more than welcome. And I can start working on the jokes for the talk that we will give about it later. Okay, so thank you for your attention. Thank you. Is there any question? Yes. I was wondering about the connection there; you didn't give a lot of citations and you didn't cite that work, so maybe you are aware of it, but I was wondering about the connection with the work that Tony Jebara did
Is there any question? Yes. I was wondering about the connection, I mean, you didn't give a lot of citations, so maybe you are aware of it, but I was wondering about the connection with the work of Tony Jebara on maximum entropy, what he calls maximum entropy discrimination. He considered something very similar to what you are doing with the hinge loss: essentially a formulation where, like you, he computes an expected value of the empirical risk, and the regularization term is replaced by a Kullback-Leibler divergence between a distribution on the parameters and a certain prior distribution. It seems very related to this; do you know about this work? No, but I am very interested in the reference, so obviously I will read this paper this afternoon or tonight after the talks. Really, I am interested in it, but I did not know it. I think it dates back to 1999, and I think the connection with PAC-Bayes is not as elegant as what you are presenting, but it is related. Thanks. Shawe-Taylor and Langford already proposed a PAC-Bayesian bound for linear classifiers; I would like to know if you know this work, and can you comment on the relation? Once again, I think we have different objectives, because usually they are more interested in the empirical version of the bound, something like this, where you provide an explicit bound. But it is true that, in some way, with PAC-Bayesian bounds you have to minimize with respect to an infinite-dimensional object, and people already had the idea of minimizing only over a smaller set of parameters. My point was really to make the connection with VB, but you are right, the algorithms are very similar to algorithms that already exist in that literature, especially the first one, with the Gaussian posterior. Once again, I do not think any of the algorithms here are new; the point was just to provide an analysis of existing algorithms, and yes, most of the algorithms already existed before; we do know about this work by John Shawe-Taylor. Did you run the SGD version, the hinge-loss version, in the experiments? You had three columns where VB was doing better. Yes, James did it, and this is the version that is implemented in the package. Oh, sorry, I removed this slide actually; I thought I had it somewhere, you are right, sorry, where is it, I am lost. We had this, and we have another one in the paper with the hinge-loss version. Usually, in most cases, say in maybe five or six cases, it improves on VB in the sense that it converges much, much faster, but there is one case where we have a very surprising accident that we did not understand; in most cases, though, it works much better than this one. And is that a difference in speed or in test error? Exactly; I mean, this is test error: it works much better in the sense that if we run the algorithms for the same time, in the end we have a better test error. The idea of this simulation study was that we do not want to separate the optimization from the test error; we want, in the end, a good test error, whatever you do, in the sense that you can have a complicated model with a poor optimization procedure, or a simple model with a good optimization procedure, and in the
end you want to compare them at the level of the test error. So the improvement we get is in the final test error, but obviously it is due to a better optimization. But to be clear, that VB column was with the original multivariate Gaussian approximation? Yes. And then you get a local minimum by, say, gradient descent? Yes. And you are saying that if you compare that with the other one, which now uses not the multivariate but the univariate, well, I guess identity covariance matrix, but with the hinge loss? Yes, and now it is convex. And you are saying that between those two they gave the same test error, but one much faster? They actually gave an even slightly better test error in some cases. I cannot remember for which problem we have the accident; probably it is for this one, I think, which seems very easy in some way, and there we have an accident in the sense that it is the worst method. It might be due to the fact that we have to restrict our attention to a diagonal covariance matrix, I don't know, but in this case we do not have a good prediction error. And yes, what I wrote there is the prediction error, the test error on half of the sample. Okay, another question. You mentioned that integrating a convex function is not necessarily convex, but how about integrating a strongly convex function? I don't know, is there something obvious, or... Exactly, I think that is the problem: when you integrate a convex loss, instead of the zero-one loss you integrate a convex loss function with respect to the approximate posterior, but the criterion also depends on the Kullback-Leibler divergence, and for the Kullback-Leibler divergence you are already lucky if it depends in a convex way on sigma, you see what I mean. Sorry, when you minimize this, if r is convex and you integrate it with respect to a probability distribution, that part should be convex as well, but the question is whether this other part is convex, and with respect to sigma it is not always the case. So even whether the integral of the loss is convex in the variance parameter is not clear; yes, you are right.
PAC-Bayesian bounds are useful tools to control the prediction risk of aggregated estimators. When dealing with the exponentially weighted aggregate (EWA), these bounds lead in some settings to the proof that the predictions are minimax-optimal. EWA is usually computed through Monte Carlo methods. However, in many practical applications, the computational cost of Monte Carlo methods is prohibitive. It is thus tempting to replace these by (faster) optimization algorithms that aim at approximating EWA: we will refer to these methods as variational Bayes (VB) methods. In this talk I will show, thanks to a PAC-Bayesian theorem, that VB approximations are well founded, in the sense that the loss incurred in terms of prediction risk is negligible in some classical settings such as linear classification, ranking... These approximations are implemented in the R package pac-vb (written by James Ridgway) that I will briefly introduce. I will especially insist on the proof of the PAC-Bayesian theorem in order to explain how this result can be extended to other settings. Joint work with James Ridgway (Bristol) and Nicolas Chopin (ENSAE).
10.5446/20248 (DOI)
Thank you. First I'd like to thank the organizers for inviting me to this great workshop. This talk is about a statistical model called the multi-armed bandit model, in which an agent interacts with a set of probability distributions, called arms, by sequentially sampling from them. This model is often used within a reinforcement learning framework, in the sense that the samples collected are viewed as rewards that the agent wants to maximize, or equivalently we say that you want to minimize a quantity called the regret. And while this regret minimization problem can be considered solved, thanks to a lower bound and algorithms matching this lower bound, I will talk today about a much less understood problem, that of best-arm identification, in which the goal is to identify as quickly and accurately as possible the arm with highest mean in the model. In this joint work with Aurélien Garivier, I will present new lower bounds for this problem, together with an algorithm that asymptotically matches our lower bound and that is also very efficient in practice. First I will start by reminding you, or explaining to you, what the multi-armed bandit model is. It is simply a collection of K probability distributions that we call arms, and an agent sequentially interacts with these arms by choosing at time t an arm A_t that he wants to draw; he then observes a sample from the associated probability distribution. Of course the sampling strategy is sequential, in the sense that the arm chosen at time t+1 may depend, possibly in some arbitrary way, only on the past chosen arms A_1 to A_t and the past observed samples X_1 up to X_t. The way we sample the arms is directed towards a goal related to learning which arm is the best in the model, and our criterion for "best" will be the arm with highest mean: we want to identify the arm a* that maximizes the mean, and we denote by mu* the mean of this arm. This learning process can come with several constraints, and the first constraint considered in the literature is the one called regret minimization, in which the samples collected are viewed as rewards, and in which the goal is to adjust the sampling strategy so as to maximize the expected sum of the rewards accumulated during the interaction. This is equivalent to minimizing the quantity that I define here as the regret, which is the expected difference between the accumulated rewards one could obtain by always playing the arm with mean mu*, which of course we do not know in the real situation, and the sum of the rewards obtained with our actual strategy. Minimizing the regret forces a trade-off between exploring the environment, trying all the arms a little bit to get an estimate of their mean payoffs, and playing the arms that have been best so far, because we have this constraint of maximizing the rewards while we learn. Originally this model arose from a simple modeling of clinical trials; this dates back to the 1930s with the work of Thompson. For example, we have a bunch of medical treatments, each associated with a Bernoulli random variable that models the variability of the treatment across patients and gives one if the treatment was successful or zero if the patient dies, and the goal in a medical trial would of course be to maximize the number of patients still alive at the end of the trial.
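As a concrete illustration of the interaction protocol and of the regret bookkeeping just described, here is a minimal sketch in Python; the arm means and the uniformly random policy are placeholders of my own.

```python
# A minimal Bernoulli bandit environment and the (pseudo-)regret accounting
# described above; everything here is an illustrative placeholder.
import numpy as np

class BernoulliBandit:
    def __init__(self, means, seed=0):
        self.means = np.asarray(means, dtype=float)
        self.rng = np.random.default_rng(seed)

    def pull(self, a):
        """Draw arm a, return a 0/1 reward."""
        return float(self.rng.random() < self.means[a])

def run(policy, bandit, horizon):
    """policy(history) -> arm index; returns the pseudo-regret after `horizon` pulls."""
    mu_star = bandit.means.max()
    history, regret = [], 0.0
    for _ in range(horizon):
        a = policy(history)
        x = bandit.pull(a)
        history.append((a, x))
        regret += mu_star - bandit.means[a]   # expected-reward gap, summed per round
    return regret

# example: uniformly random policy on a 4-arm problem
bandit = BernoulliBandit([0.3, 0.21, 0.2, 0.19])
rng = np.random.default_rng(1)
random_policy = lambda hist: rng.integers(len(bandit.means))
print("regret of the uniform policy:", run(random_policy, bandit, 1000))
```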
But this would actually be a rephrasing of maximizing the expected sum of rewards. If you talk to people running real clinical trials these days, in the early stages they are not really concerned with curing patients while they learn the efficacy of the treatments; they would be more interested in an alternative objective, which is to learn quickly which treatment is the best among the pool of candidate treatments, because later, in the next phases, this treatment will be given to a much larger number of patients. So they would be more interested in the best-arm identification problem, which I introduce here a little more mathematically. The goal is to identify quickly the arm a*, the arm with the highest mean, but this time without the incentive to draw arms that have high means: we are only looking for a strategy that optimally explores the environment. The important part of the strategy is still the way we choose the arm to draw at the current stage based on the previous history, but we also need a random stopping time telling us when we can stop the experiment, that is, when we are convinced that we can identify the best arm, after which we make a guess; this is our recommendation rule, a guess a-hat for the arm a*. Several goals have been considered in the literature, and I give two here. Either we fix a budget, so we know we can only sample the arms capital-T times, and we want to make a recommendation that is as accurate as possible, that is, one that minimizes the probability of making a mistake; or we fix some risk parameter delta and we want to guarantee that the recommendation we make is wrong with probability at most delta, and to reach this recommendation we want to need as few samples from the arms as possible, that is, to minimize what we call here the sample complexity.
This framework can model a clinical trial, as I said, but it is maybe more relevant for some market research applications, in which for example a company has to decide which product it wants to commercialize: it wants to identify the best product with high probability, and the company is willing to lose a little bit of money during the learning in order to make much more money afterwards. In this talk we focus on the fixed-confidence setting. More precisely, given a class of possible bandit models, for example all models in which the arms are Bernoulli distributions, we want to build strategies that are delta-PAC on this class, that is, we can guarantee that for any bandit model in this class the strategy outputs the best arm with probability larger than 1 minus delta; and among delta-PAC strategies we want to minimize the sample complexity. In this talk I want to tell you what is the minimal expected number of samples needed by a delta-PAC algorithm, and the answer will consist, first, in a lower bound on this sample complexity, and then in exhibiting a delta-PAC strategy whose expected sample complexity matches our lower bound. It will be a distribution-dependent lower bound, depending on the class of bandit models that we consider, and we solve this problem for a particular type of one-parameter bandit models that we call exponential family bandit models. (No, I don't think I want to install OS X El Capitan right now, okay.) So we study a class of bandit models in which the distributions of all arms belong to a set of probability distributions parameterized by a real parameter theta, called the natural parameter, with densities of the following form. If we particularize the b function here, this class recovers a lot of well-known distributions, like Bernoulli distributions, Poisson distributions, Gaussians with known variance, and so forth. A good feature of this class of one-parameter models is that the distribution nu_theta can also be reparameterized by its mean, because there is a one-to-one mapping between the natural parameter theta and the mean; and as our parameter of interest in the bandit problem is the mean, we will denote by nu^mu the unique distribution in the class P that has mean mu. I also introduce an important notion to characterize the complexity of the problem in terms of an information-theoretic quantity, the Kullback-Leibler divergence, here parameterized by the means of the distributions: we introduce d(mu, mu') as the Kullback-Leibler divergence between the unique distribution of mean mu and the unique distribution of mean mu'. We can give a closed form for the particular examples that I mentioned, and here I give it for the Bernoulli case. The class of bandit models that we consider will then be this class. To ease notation, since each arm depends on some parameter, I will identify this set of distributions with the vector of their means, and I will consider all the bandit problems for which there exists one arm whose mean is strictly larger than the others, so the set of all bandit models that have a unique optimal arm.
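For the Bernoulli case just mentioned, the divergence d has the closed form below; the clipping constant is my own choice to avoid taking the logarithm of zero at the boundary.

```python
# Closed-form Kullback-Leibler divergence between Bernoulli distributions,
# d(mu, mu') = mu log(mu/mu') + (1-mu) log((1-mu)/(1-mu')),
# with clipping to avoid log(0) at the boundary.
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

print(kl_bernoulli(0.5, 0.6))   # ~0.0204
```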
Before presenting the lower bound and the matching algorithm, let me quickly review why I said in the introduction that the regret minimization objective is solved; we will also find there some inspiration for what we want to prove for the best-arm identification problem. For regret minimization, an important result that dates back to the 1980s is a lower bound on the regret given by Lai and Robbins, which follows from this rewriting of the regret as a function of the number of times each arm has been drawn up to time t: more precisely, the regret is the sum over the arms of the suboptimality gap, the gap between the mean of the best arm and the mean of arm a, multiplied by the expected number of times arm a has been drawn up to time t. So to have an algorithm with low regret, we need an algorithm for which the expected number of draws of any suboptimal arm is small, and the lower bound of Lai and Robbins gives us a limit on how small this can be: it tells us that suboptimal arms must be drawn infinitely often, and more specifically that the expected number of draws of a suboptimal arm a up to time t is asymptotically lower bounded by log t divided by d(mu_a, mu*). Here the Kullback-Leibler divergence between the distribution of mean mu_a and the distribution of mean mu* appears, and of course we understand that the smaller this quantity, the more the arm has to be drawn, because we have trouble discriminating between this arm and the optimal arm. This lower bound permits us to define a notion of asymptotic optimality: an algorithm is asymptotically optimal if the expected number of draws of any suboptimal arm a is asymptotically upper bounded by log t divided by the right information-theoretic quantity. Let me quickly show you that there exists such an asymptotically optimal algorithm, based on the so-called UCB principle. It is a very simple algorithm that computes one index per arm and chooses the arm with the highest index, where the index is an upper confidence bound on the mean of the arm. Several UCB-type algorithms have been proposed, but to get the asymptotic optimality property one has to be careful about the way the confidence intervals are built, and here you see a non-explicit upper confidence bound that is computed using the function d itself, the Kullback-Leibler divergence in the exponential family that we consider. These types of confidence intervals follow by applying a Chernoff method and rely on some specific properties of the exponential family, but practically, to compute the index we just need the function d from the empirical mean to x, as a function of x, and to find where it crosses the level log t divided by the number of draws; this gives the so-called KL-UCB index. This KL-UCB algorithm has been shown to be asymptotically optimal through a finite-time analysis: it was proved in this paper that the expected number of draws of a suboptimal arm a is upper bounded by log t divided by d(mu_a, mu*), plus a second-order term that is negligible compared to log t. So this proves the following: the infimum over all consistent algorithms of the limit of the regret divided by log t is exactly equal to the sum over the arms of the gap between the mean of the best arm and the mean of arm a, divided by the Kullback-Leibler divergence between mu_a and mu*. This result dates back to 1985, because in addition to the lower bound, Lai and Robbins actually proposed an asymptotically optimal algorithm.
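Here is a sketch of how the KL-UCB index just described can be computed in the Bernoulli case, by bisection over q >= mu_hat on the equation d(mu_hat, q) = log(t)/N_a. It assumes the kl_bernoulli helper from the previous sketch and uses the plain log(t) exploration rate, which is a simplification of my own.

```python
# Sketch of the KL-UCB index for Bernoulli arms: the largest q >= mu_hat such
# that N_a * d(mu_hat, q) <= log(t).  Bisection works because d(mu_hat, .) is
# increasing on [mu_hat, 1].  Assumes kl_bernoulli from the previous sketch.
import numpy as np

def kl_ucb_index(mu_hat, n_draws, t, tol=1e-6):
    level = np.log(max(t, 2)) / n_draws
    lo, hi = mu_hat, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(mu_hat, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo

# KL-UCB then pulls argmax_a kl_ucb_index(mu_hat[a], N[a], t) at each round t.
```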
However, that algorithm was not really explicit or practical, and more efficient algorithms were proposed afterwards, up to algorithms like KL-UCB that are simple to implement and also asymptotically optimal. For the best-arm identification problem that we are looking at today, I am going to try to provide the same first two steps, and we will see that the algorithm is also efficient to implement in practice. Let me quickly recall the best-arm identification problem. Whenever I work with a bandit model parameterized by mu, I will assume for simplicity that the arms are ordered in decreasing order of their means and that there is a gap between mu_1 and mu_2. As I told you, the algorithm is made of three things, the sampling rule, the stopping rule and the recommendation rule, and we have to guarantee that for any bandit model mu the probability that the recommendation rule outputs the optimal arm is larger than 1 minus delta, and we want a small sample complexity, a small expected number of draws. In the literature one can find many delta-PAC algorithms for which bounds on the sample complexity are given, either in expectation or with high probability, but the existing bounds scale like this: the order of magnitude of the sample complexity is log(1/delta), where delta is our risk parameter, multiplied by a quantity that depends on the bandit model and takes the form of a sum over the arms, where for each suboptimal arm we have the inverse squared gap between the best arm and this arm, and for the optimal arm the inverse squared gap between the optimal arm and the second best arm. In the Bernoulli case at least, these squared gaps look like a sub-Gaussian approximation of the d function, the Kullback-Leibler divergence, so in this quantity the information-theoretic terms are not yet identified; moreover, in the big O there are a lot of, sometimes even non-explicit, constants hiding. My goal is really to have a lower bound and an upper bound that match, with the right constants, and that involve the information-theoretic terms; in this sense we can say that the optimal sample complexity was not yet identified. So let us try to obtain some new lower bounds for this problem. I will first introduce some useful tools to derive lower bounds, and we will derive together a lower bound for this problem. Lower bounds for either regret minimization or best-arm identification rely on so-called change-of-measure arguments: the idea is that if we want to lower bound the number of samples needed under some specific bandit model mu, we need to find another bandit instance lambda under which the behavior of the algorithm must be quite different, and under which all the arms need to be drawn a little bit. Changes of distribution can be quite technical, but here I propose a useful lemma that we derived with Olivier Cappé and Aurélien Garivier.
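For reference, here is the change-of-measure inequality just announced, written in symbols as I understand it from the talk; the notation (N_a(tau) for the number of draws of arm a at the stopping time tau, kl for the binary relative entropy) is mine.

```latex
% Change-of-measure inequality (as described in the talk): for any delta-PAC
% algorithm with stopping time \tau, and any bandit model \lambda whose optimal
% arm differs from that of \mu,
\sum_{a=1}^{K} \mathbb{E}_{\mu}\!\left[N_a(\tau)\right] d(\mu_a,\lambda_a)
\;\ge\; \mathrm{kl}(\delta,1-\delta),
\qquad \mathrm{kl}(x,y)=x\log\tfrac{x}{y}+(1-x)\log\tfrac{1-x}{1-y},
% where kl(delta, 1-delta) is of order log(1/delta) as delta goes to 0.
```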
It relates in a quite explicit way the risk parameter delta to the expected number of draws of the arms, through the Kullback-Leibler divergence between the distribution of an arm a under the model mu and under a model lambda, and the inequality holds for every bandit model lambda that has a different optimal arm compared to the original bandit model mu. The result tells us that the sum over the arms of the expected number of draws multiplied by the information term is lower bounded by the binary relative entropy between delta and 1 minus delta, which is roughly of order log(1/delta); this is the log(1/delta) that we saw in the state of the art. Let me explain how to use this result to derive lower bounds. Since we need a lower bound on the number of samples, a first idea would be to separately lower bound the number of times each arm should be drawn. For example, fix an arm a between 2 and K, so a suboptimal arm a, whose expected number of draws we want to lower bound. The idea is that if we choose a bandit model lambda in which only a few of these information terms are nonzero, we directly get a lower bound on the expected number of draws of a. So we choose a bandit model lambda in which, for all i different from a, lambda_i equals mu_i, so the corresponding information term is 0, and for arm a we move the mean mu_a slightly above the mean of the optimal arm, setting it to mu_1 plus epsilon. Under this bandit model lambda the optimal arm is now arm a, whereas in the original bandit model the optimal arm was arm 1, so the condition of the lemma is satisfied. Writing the inequality for this particular choice of lambda, we get that the expected number of draws of a multiplied by d(mu_a, mu_1 + epsilon) is lower bounded by kl(delta, 1 - delta), which gives the following lower bound on the expected number of draws of arm a when we let epsilon go to 0. What we obtain with this argument, repeating it for all suboptimal arms and doing something similar for the optimal arm, is the following lower bound: for any delta-PAC algorithm, the sample complexity is lower bounded by, roughly, log(1/delta) multiplied by this complexity term. Here we have something that looks like an equivalent of the Lai and Robbins lower bound, because we have the divergence between mu_a and the mean of the optimal arm, and, for the optimal arm, the divergence between the optimal arm and the second best arm. For some time I believed that this was the right lower bound and that we could find an algorithm matching it, but it turns out that this lower bound is not tight enough. Actually we can derive the optimal lower bound, and it will be a three-line proof. The idea is to start with the very same lemma, the very same change of distribution: in the previous proof we chose specific values of lambda and wrote the statement for those values, but since the lemma holds for every bandit model with a different optimal arm, we can simply define the set Alt(mu) of all bandit models lambda that have a different optimal arm, and then it holds that the infimum over lambda in this set of the sum is still lower bounded by kl(delta, 1 - delta), roughly log(1/delta). And since we want to lower bound the sample complexity, we artificially introduce it: we multiply and divide by the expectation of tau.
At this stage we are not completely happy, because this quantity depends on the algorithm through the expected number of draws of the arms, so the idea is simply to upper bound it by something that does not depend on the algorithm. Noting that these quantities sum to one, so that they form a probability vector, we can upper bound this by the supremum over all w in the simplex of size K, the vectors that sum to one, of this quantity. And so we have proved, in three lines, a very simple yet non-explicit lower bound on the sample complexity, telling us that the sample complexity is lower bounded by T*(mu), some characteristic number of samples of the problem, multiplied by log(1/delta), where T* has this non-explicit form. This actually reminds us of some non-explicit lower bounds that already exist in the bandit literature. The first one, given by Graves and Lai in 1997, was a lower bound on the regret, but for very general models with possible correlations between arms, so it was somewhat natural to have something less explicit than the Lai and Robbins lower bound. The second is a recent paper from last year that studies the best-arm identification problem, but for a different class of bandit models: whereas we study all the bandit models with a single optimal arm, they were concerned with bandit models in which only one arm is different from the others, so you have one optimal arm and a set of arms that all have the same value; for this specific class, which is different from ours, they also derived a non-explicit lower bound. Going back to our setting, what is actually very interesting with this bound is that we understand from the proof that these w_a are a substitute for the proportions of draws: they stand for the ratio of the expected number of draws of a divided by the expected total number of samples, so they can be viewed as the proportions of draws of the arms under some optimal strategy. So if we take w*(mu) realizing the argmax here, we expect it to contain the optimal proportions of draws of the arms. At this stage you could ask me a question: can I really define this? I never told you that this argmax is unique, so to justify this definition I need to show you that it is unique and hence well defined; and as a byproduct we will come up with an efficient algorithm to compute these non-explicit values. We can start by being a little less ambitious and just fix some w* in this argmax. Because we are working with exponential family bandit models, we can give a more explicit formulation of the second optimization problem, the one in lambda: an explicit calculation shows that it is equal to the minimum, over all suboptimal arms a different from 1, of this quantity, which is a weighted sum of Kullback-Leibler divergences. If we factor out w_1 in this expression, we see that it is, up to w_1, simply a function of the ratio between w_a and w_1, through a function g_a that we define here, and a little bit of calculus shows that this function g_a is a one-to-one mapping between R+ and the interval (0, d(mu_1, mu_a)), which will be useful in the sequel. To rephrase this a little more explicitly, we introduce the following notation: we work with x_a*, the ratio between w_a* and w_1*, and with this notation, using that the w_a sum to one, w_1 is 1 divided by the sum of the x_a.
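Before continuing with the computation of w*, here is the lower bound just established and the characteristic time, in symbols (again my own rendering of what is on the slides).

```latex
\mathbb{E}_{\mu}[\tau_\delta] \;\ge\; T^*(\mu)\,\mathrm{kl}(\delta,1-\delta),
\qquad
T^*(\mu)^{-1} \;=\; \sup_{w\in\Sigma_K}\;\inf_{\lambda\in\mathrm{Alt}(\mu)}
\sum_{a=1}^{K} w_a\, d(\mu_a,\lambda_a),
% with \Sigma_K the simplex of probability vectors over the K arms,
% Alt(\mu) the set of bandit models whose optimal arm differs from that of \mu,
% and w^*(\mu) the maximizing vector of proportions discussed in the text.
```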
So finally, to find the x_a*, we are looking at x_2*, ..., x_K* within this argmax, and the next step is to realize that, since we have a minimum of K minus 1 functions, it is easy to check that at the optimum all these K minus 1 functions have to take the same value: there exists some real value, which I denote by y*, such that g_a(x_a) at the optimum is equal to this value for every a. So the optimization problem we are solving can be completely reduced to a one-dimensional optimization problem: defining x_a as the inverse of the function g_a introduced before, we have to find the y* that maximizes this function over the interval (0, d(mu_1, mu_2)). It is possible to compute the derivative of this function, and by solving the equation "derivative equals zero" we can prove that there is a unique point realizing the argmax, which shows that there is a unique vector of optimal weights w*. More precisely, we have the following theorem in the paper, which characterizes the value of w*(mu) in terms of the functions x_a that I introduced, and shows that the computation of y* reduces to solving a single scalar equation: F is an increasing function of y, so we can solve it by bisection, say, and each evaluation of F reduces to solving K minus 1 smooth scalar equations. So in the end we have an efficient way to compute this important vector w*(mu). The idea of the algorithm that we propose, in order to attain the lower bound, is to try to match these proportions, and that is indeed what our tracking sampling rule does. At a given stage of the algorithm we have formed the vector mu-hat of the empirical means of all the arms, based on the draws we have so far. The tracking sampling rule first checks whether there is an arm that has been drawn fewer than square root of t times at time t; if this is the case, we draw such an arm, and this is called the forced exploration phase. If all the arms have been drawn more than root t times, we choose the arm that maximizes t times w_a*(mu-hat(t)) minus N_a(t). This sampling rule is built so that, with probability one, the fraction of draws of arm a converges to the target optimal value w_a*(mu). You can see that the algorithm requires computing the vector w* for the current empirical means, so at each step of the algorithm we solve the previously described optimization problem to compute the weights. Now, a key feature of the best-arm identification problem is the stopping rule: we should stop as soon as possible in order to have a low sample complexity. The stopping rule that we propose can simply be motivated by a statistical testing problem. We introduce Z_ab(t) as a log-likelihood ratio: here we have the maximum likelihood under the constraint that the mean of arm a is larger than the mean of arm b, and there the maximum likelihood under the opposite constraint, and it is easy to understand that high values of this statistic tend to reject the hypothesis that mu_a is smaller than mu_b. So we will stop when one arm can be shown to be significantly larger than all the others, in the sense of these generalized likelihood ratio tests.
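Here is a sketch of the computation of w*(mu) for Bernoulli arms. Rather than the bisection scheme of the theorem, it takes a numerical shortcut of my own: it directly maximizes the concave inner value over the simplex, using the closed form of the inner infimum (for the pair (1, a), both means are pulled to their weighted average). It assumes kl_bernoulli from the earlier sketch, and the use of SciPy's SLSQP solver is just a convenient choice.

```python
# Sketch: numerical computation of the optimal proportions w*(mu) and of the
# characteristic time T*(mu) for Bernoulli arms, by direct maximization over
# the simplex of the inner value (which is concave in w).
import numpy as np
from scipy.optimize import minimize

def inner_value(w, mu):
    """min over suboptimal a of  w_1 d(mu_1, m) + w_a d(mu_a, m),  m = weighted mean."""
    best = int(np.argmax(mu))
    vals = []
    for a in range(len(mu)):
        if a == best:
            continue
        m = (w[best] * mu[best] + w[a] * mu[a]) / (w[best] + w[a])
        vals.append(w[best] * kl_bernoulli(mu[best], m) + w[a] * kl_bernoulli(mu[a], m))
    return min(vals)

def optimal_weights(mu):
    mu = np.asarray(mu, dtype=float)
    K = len(mu)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(lambda w: -inner_value(w, mu), x0=np.full(K, 1.0 / K),
                   bounds=[(1e-6, 1.0)] * K, constraints=cons, method='SLSQP')
    w_star = res.x / res.x.sum()
    return w_star, inner_value(w_star, mu)      # (w*, 1 / T*(mu))

w_star, inv_T = optimal_weights([0.5, 0.45, 0.43, 0.4])
print(np.round(w_star, 3), "characteristic time T* ~", round(1.0 / inv_T, 1))
```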
Our stopping time tau_delta, which I index by delta because the moment we stop of course depends on the risk parameter delta we are given, can be rephrased in the following way: we stop when there exists an arm a such that, for all other arms b, the statistic Z_ab(t) is larger than some threshold beta(t, delta). This stopping rule can actually be traced back to old work by Chernoff in 1959 on sequential adaptive hypothesis testing. He was concerned with finitely many hypotheses, whereas here our hypotheses are continuous, because we want to test whether mu_a is larger than all the other means, but that paper already gave us a lot of intuition on how to devise good strategies. Of course, under our exponential family assumption we can again give an explicit formula for this generalized likelihood ratio statistic, showing in particular that if mu-hat_a(t), the empirical mean of arm a, is larger than the empirical mean of arm b, we have the following expression, which features a weighted average of the empirical means: a bit like in the lower bound, we have a weighted sum of information-theoretic terms. I introduced this stopping rule through generalized likelihood ratio test ideas, but there are actually several possible interpretations, one of which is related to the lower bound we gave. The stopping statistic can be shown to be equal to t multiplied by the infimum, over lambda in Alt(mu-hat(t)), that is, over all the bandit models in which the optimal arm differs from our current guess, of the sum of this quantity. So we understand that if we use a sampling strategy such that the fractions of draws of the arms converge to w*, the solution of our optimization problem, we will recover the complexity term T*(mu). Hence, if we have a good sampling rule, and we believe this is true, we should stop when this quantity exceeds log(1/delta), because we would then exactly recover the sample complexity T*(mu) times log(1/delta); but we will see that we need a slightly larger threshold for stopping, and I will give you some pointers on how to choose it. Before that, another interesting interpretation of the stopping rule is in terms of information theory, in terms of the minimal number of bits needed to code the sequences of zeros and ones produced by arms a and b. Our generalized likelihood ratio statistic can be rewritten using the Shannon entropy of the distributions in the following form: on one side we have the number of times we have drawn a and b, multiplied by the Shannon entropy of mu-hat_{a,b}(t), which represents the average number of bits we would need to code all the samples produced by both arms together; on the other side we have the number of bits needed if we code separately the samples obtained from a and the samples obtained from b. So another interpretation is that we stop when the first quantity is significantly larger than the second, meaning that it is really more costly to encode a and b together, so that a and b must be far apart. I will not have time to go into the details of the proof, but it involves some useful information-theoretic arguments.
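Here is the Bernoulli form of the stopping statistic and of the stopping test just described, as a sketch; it again assumes the kl_bernoulli helper from the earlier sketch, and the threshold beta(t, delta) is left as a parameter.

```python
# Sketch of the Chernoff stopping statistic for Bernoulli arms.  With N_a, N_b
# draws and empirical means mu_a > mu_b, the generalized likelihood ratio
# statistic is  Z_ab = N_a d(mu_a, m) + N_b d(mu_b, m)
# with  m = (N_a mu_a + N_b mu_b) / (N_a + N_b).
import numpy as np

def glr_statistic(n_a, mu_a, n_b, mu_b):
    if mu_a <= mu_b:
        return 0.0   # the signed statistic is <= 0 here; we only use it for the empirical best
    m = (n_a * mu_a + n_b * mu_b) / (n_a + n_b)
    return n_a * kl_bernoulli(mu_a, m) + n_b * kl_bernoulli(mu_b, m)

def should_stop(counts, means, threshold):
    """Stop when the empirical best arm beats every other arm at the given threshold."""
    best = int(np.argmax(means))
    return all(glr_statistic(counts[best], means[best], counts[b], means[b]) > threshold
               for b in range(len(means)) if b != best)
```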
What we prove is that if we choose, as announced, a threshold slightly larger than log(1/delta), more precisely the logarithm of something proportional to t divided by delta, then we have a delta-PAC algorithm. This choice comes from the following result: if the mean of arm a is smaller than the mean of arm b, then for any sampling rule, the probability that there exists a t such that Z_ab(t) exceeds log(2t/delta) is smaller than delta. I am not sure I have time for this, but let me just sketch it. Usually, to prove this kind of result one uses concentration inequalities; here I also use a change-of-measure idea. If we introduce tau_ab as the first time at which the statistic exceeds the threshold log(2t/delta), then all we have to prove is that the probability that tau_ab is finite is smaller than delta. To do so, we just use the definition of Z_ab(t) as a likelihood ratio statistic: the event "tau_ab equals t" means that this likelihood ratio exceeds 2t/delta, and the probability that tau_ab is finite is the sum over all t of the expectation of the indicator of that event. The idea is then to upper bound one by this ratio multiplied by delta divided by 2t, a bit like the trick used to prove Markov's inequality, and you are left with this quantity. Then, using that mu_a is smaller than mu_b, you can lower bound the denominator by its value at the particular instance (mu_a, mu_b) that we consider, so we have the following equality. What will be useful is that when we write out the integral, and I do it here in the Bernoulli case, there is a sum over all the possible observed rewards up to time t, and this is going to cancel with the likelihood term. We would be very happy if we had a probability density here, which is not the case because of the maximum. The information-theoretic idea used in the proof is an existing uniform bound on the likelihood of a sequence of Bernoulli observations in terms of a quantity called KT(x), the Krichevsky-Trofimov distribution, which is a partially integrated likelihood: p_u(x) is the likelihood of the observed sequence under a Bernoulli of mean u, and we integrate over u under some prior distribution. Upper bounding the likelihood of the samples gathered from arm a by this quantity gives us something that is a probability distribution over the observed sequences of zeros and ones, and then the change-of-measure idea comes from the fact that we can interpret this sum as an expectation, under some alternative probability space, of the indicator of these events; this gives an upper bound by something which is a probability, hence smaller than one. So this is a new way to derive this kind of result, compared to the usual concentration inequalities used in other works, which could also be used here but yield less precise results. To summarize, what algorithm did we exhibit and what guarantees did we prove? The algorithm, which I call the Track-and-Stop strategy, uses the tracking sampling rule that I presented, whose idea is to match the vector w* of optimal proportions of draws; then the Chernoff stopping rule, based on the generalized likelihood ratio statistic with this threshold; and the recommendation rule: when we stop, we recommend the arm with the highest empirical mean.
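Putting the earlier sketches together, here is a sketch of the Track-and-Stop loop just summarized. It reuses BernoulliBandit, optimal_weights and should_stop from the previous sketches (so it is not self-contained on its own), performs forced exploration below sqrt(t), and uses the log(2t/delta) threshold described above; re-solving for w* at every single step is wasteful but keeps the sketch close to the description in the talk.

```python
# Sketch of the Track-and-Stop loop: tracking sampling rule + Chernoff stopping
# rule + recommendation of the empirical best arm.  Assumes BernoulliBandit,
# optimal_weights and should_stop from the earlier sketches are in scope.
import numpy as np

def track_and_stop(bandit, delta, max_steps=100_000):
    K = len(bandit.means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    for a in range(K):                       # one initial draw of each arm
        sums[a] += bandit.pull(a)
        counts[a] += 1
    for t in range(K, max_steps):
        means = sums / counts
        if should_stop(counts, means, np.log(2.0 * t / delta)):
            return int(np.argmax(means)), int(counts.sum())
        under = np.where(counts < np.sqrt(t))[0]
        if len(under) > 0:                   # forced exploration
            a = int(under[np.argmin(counts[under])])
        else:                                # tracking of the optimal proportions
            w_star, _ = optimal_weights(means)
            a = int(np.argmax(t * w_star - counts))
        sums[a] += bandit.pull(a)
        counts[a] += 1
    return int(np.argmax(sums / counts)), int(counts.sum())

# bandit = BernoulliBandit([0.5, 0.45, 0.43, 0.4])
# print(track_and_stop(bandit, delta=0.1))   # (recommended arm, number of samples used)
```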
We prove that this algorithm is delta-PAC and satisfies that the limit, as delta goes to zero, of the expected number of samples divided by log(1/delta) is equal to T*(mu), the characteristic time that appeared in our lower bound. It is pretty easy to prove this equality almost surely; the technical part is to handle the expectation. I will skip the proof, but it takes one slide, so if one day you want to look at the paper, it is also very short. I want to spend a few minutes on the practical implications of this work: how does our Track-and-Stop strategy compare to state-of-the-art algorithms for this problem? To do so, I have to quickly introduce a few algorithms that can be used for this problem. Usually they are of two kinds: either they use upper and lower confidence bounds, so they are counterparts of the UCB-type algorithms for the best-arm identification problem, or they use an elimination principle: you start with all the arms and then successively eliminate the arms that you are convinced are not optimal. The first algorithm I present is of the first kind and is called KL-LUCB. It is a bit reminiscent of the KL-UCB algorithm, except that it uses an upper confidence bound, again based on this d function, and also a lower confidence bound, and whereas in KL-UCB we had a log t, here we have an exploration rate that depends on t and delta, the equivalent of the threshold we saw before. The algorithm actually samples two arms at each round: it samples the empirical best arm, the arm that maximizes mu-hat_a(t), and it also samples, among the other arms, the one with the largest upper confidence bound, so here the arm in bold is sampled. It stops whenever the lower confidence bound of the empirical best arm is larger than the upper confidence bounds of all the other arms, so roughly when the confidence intervals are separated, and then of course it recommends the empirical best arm. The other algorithm, which I call KL-Racing, is of the second type. It maintains a set R of remaining arms, the arms still in the race, and it proceeds in rounds. At each round we first draw all the arms in R; after round r each remaining arm has been drawn r times, so the empirical means are based on r samples. Then we compute the empirical best arm, the one with the highest empirical mean, and the empirical worst arm, and if the best is significantly larger than the worst, we discard the worst. The criterion used to perform the elimination is similar to the KL-LUCB one and is based on confidence intervals: we eliminate the worst arm if the lower confidence bound of the empirical best is larger than the upper confidence bound of the empirical worst, and we then remove it from the set R. We stop when there is a single element left in the set of remaining arms, which we output as our guess for the optimal arm a*. This is a generic algorithm in which you could replace the elimination step by some other criterion, and empirically we also tried to improve the procedure by replacing the elimination step as follows: we eliminate an arm when the likelihood ratio statistic of the empirical best versus the empirical worst exceeds some threshold.
In a more explicit form, this amounts to eliminating an arm when this inequality holds for a equal to the empirical best and b equal to the empirical worst. We call this the Chernoff-Racing algorithm, because it is a racing-type algorithm that uses our Chernoff stopping criterion. I present here numerical results on two Bernoulli bandit models, one with four arms and the other with five arms; I give the values of the means of the arms, together with the optimal proportions of draws in each of the two models. I also mention that in practice the threshold function, and the exploration rates in the confidence intervals, are all set to log(log(t)/delta), whereas the guarantee was proved for, I think, 2t here, so they are a bit smaller than what the theory allows, but they remain quite conservative. With this choice we implemented the Track-and-Stop strategy, the two state-of-the-art algorithms, and also our improvement of the racing-type algorithm based on the Chernoff stopping rule, and what we see is that, compared to the state of the art, the sample complexities are roughly divided by two, so there is a huge practical improvement. This is run for the specific value delta equals 0.1, but of course the same trends occur if we take smaller values of delta. An interesting phenomenon to highlight is that on the first bandit model Chernoff-Racing performs similarly to Track-and-Stop, whereas on the second it performs worse than KL-LUCB, so it is less robust across bandit problems. The reason lies in the way the algorithm is built: a racing-type algorithm always ends with two arms that have been drawn the same number of times, because when we stop there are still two arms that have been drawn equally often; so if we look at the proportions of draws, we would have two equal values in the empirical proportions, whereas in problem two the optimal proportions of draws of arms one and two are more separated than in the first problem, and then it is natural that a racing-type algorithm performs less well on this model. To conclude, we proved the following for best-arm identification: we computed the value of the infimum, over delta-PAC algorithms, of the limit of the ratio of the sample complexity to log(1/delta); we propose a slightly more explicit formulation in the paper, and more importantly a characterization of the optimal proportions w*, which can be computed efficiently and permits us to derive an efficient strategy matching the bound. There is plenty of future work, because the analysis we propose is really asymptotic, so we would like a finite-time analysis, just like what exists for regret minimization; we can also imagine other ways to use the knowledge of the optimal weights, and we would like to combine them with other successful heuristics from the bandit literature, like the use of upper and lower confidence bounds, or the use of Thompson sampling. Thank you. So, I am wondering what you would lose if, to go back to your original problem of minimizing the regret, you first tried to find the best arm and then systematically played that best arm; how much would you lose with that approach? It would be suboptimal, I think. This kind of strategy has actually been proposed (it is sometimes called explore-then-commit): you dissociate the exploration phase and the exploitation phase, and I think
that if you were to use an optimal algorithm for best-arm identification in the first phase and then always played the empirical best arm until the end, then in the upper bound you could derive for that algorithm you would have this T*(mu) appear somewhere, and I don't think you would end up with the complexity of the regret minimization problem, which is much simpler and explicit. I think it would be a constant factor in front of the log, but it could be at least twice this number. And actually I tried empirically, not with the optimal algorithm but with good heuristics, and for regret minimization it is still better to balance exploration and exploitation as you go than to dissociate them. Have you considered, as your metric of success, the error in the mean instead of the probability of picking the right arm? Yes, this is called the simple regret, the optimization error somehow: the criterion is to minimize mu* minus the mean of the arm you output. Some work in the literature has considered this measure, which is, I agree, a bit different from just the probability of error, but for example I do not know a lower bound for the simple regret, and it would actually be interesting future work as well. I have a question related to the last one: if you think of a clinical trial, what if you just want to find a treatment within 5% of the best? In particular, this covers the case where you have several equally good arms. Yes, this is a natural way to relax the problem as well: you fix some epsilon, say 0.05, and your goal is to output an arm whose mean is larger than mu* minus epsilon. This relaxation has also been considered in the literature, and you can adapt the algorithm to handle that case, but again, for the lower bound, I would not be able for the moment to derive a lower bound that incorporates that epsilon. There are algorithms with upper bounds that involve something like the maximum between epsilon and the minimal gap between the arms. The idea, for example with a racing-type algorithm, would simply be: if after (1/epsilon^2) log(1/delta) samples you still have not stopped, then you stop and output whatever arm is left; this would work as a heuristic for this problem.
This talk proposes a complete characterization of the complexity of best-arm identification in one-parameter bandit models. We first give a new, tight lower bound on the sample complexity, that is the total number of draws of the arms needed in order to identify the arm with highest mean with a prescribed accuracy. This lower bound does not take an explicit form, but reveals the existence of a vector of optimal proportions of draws of the arms, that can be computed efficiently. We then propose a 'Track-and-Stop' strategy, whose sample complexity is proved to asymptotically match the lower bound. It consists in a new sampling rule, which tracks the optimal proportions of arm draws, and a stopping rule for which we propose several interpretations and that can be traced back to Chernoff (1959).
10.5446/20246 (DOI)
Thank you very much. I'd first like to thank the organizers for the invitation to speak here; it is a great honor to speak at a conference celebrating Arthur's birthday. I should apologize at the beginning that I am rather new to this topic and will probably make mistakes, so the experts should correct me at any time. As Professor Bloch just said, physics is difficult, and I think p-adic Hodge theory is also difficult, at least for me, so I will just try to appreciate it. Let me record the following setup. Start with K over Q_p a finite extension, so a p-adic field, and recall that a p-adic representation rho of the Galois group on some vector space V over Q_p is called de Rham if the following holds: you first tensor V over Q_p with Fontaine's period ring B_dR and take the Galois invariants; then the dimension of the result is the same as the dimension of V over Q_p. This notion is important because we know that any p-adic representation coming from geometry, namely one realized as the action of the Galois group on the etale cohomology of a smooth projective algebraic variety, satisfies this property. This is known, I guess, as Fontaine's conjecture, and was proved by various people. The situation we are interested in, and I should say this is joint work, is the following. Suppose X over K is a smooth, connected rigid analytic variety; I could start with an algebraic variety, but eventually I need to go to analytic geometry, so let us start here from the beginning. Suppose L is a Q_p, or let's say a Z_p, etale local system on X; for our purposes it is the same thing. If you are given a finite extension of K and a point of the variety defined over it, you can consider the stalk of this local system; more precisely, you want to consider the geometric stalk over a geometric point above it, and then this stalk is a representation of the Galois group, so it is a p-adic representation. Here is the theorem I am going to explain: if for some point x the stalk L_x-bar is de Rham, then L_y-bar is de Rham for every point y defined over any finite extension, namely over any classical point the corresponding Galois representation is de Rham. You will see that the proof is really simple, but let me first make a few remarks and give an application before explaining the idea. First remark: you can ask the same question with "de Rham" replaced by "crystalline", but then of course the statement is not correct; there are families of varieties that have good reduction at most points but only semistable reduction at some points, for example. You can also ask the Hodge-Tate version; I am not sure, I guess it is false, but I cannot write down an example. What about the semistable version? I don't know; it should be correct, but so far I only understand what B_dR is; maybe later, after I understand the more sophisticated period rings. The second remark is that this L can be thought of as a family of p-adic representations parameterized by the variety X.
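To fix notation, here is the de Rham condition and the rigidity theorem just stated, written out in symbols (my transcription of the statements in the talk).

```latex
% A p-adic representation \rho : G_K \to \mathrm{GL}(V), with V a finite-dimensional
% \mathbb{Q}_p-vector space, is de Rham if
\dim_{K}\left(V \otimes_{\mathbb{Q}_p} B_{\mathrm{dR}}\right)^{G_K}
\;=\; \dim_{\mathbb{Q}_p} V .
% Theorem (rigidity, as stated above): for a Q_p (or Z_p) etale local system L on a
% smooth connected rigid analytic variety X over K, if the stalk L_{\bar{x}} is a de
% Rham representation at one classical point x, then L_{\bar{y}} is de Rham at every
% classical point y of X.
```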
So usually this is called a geometric family. But there is another notion, important in studying Galois representations, called an arithmetic family of p-adic representations: in that situation, instead of an etale local system on a variety, you consider the Galois group acting on a vector bundle over some rigid analytic space. In that situation the story is completely different. For such an arithmetic family, if I understand correctly, it is proved by Berger and Colmez that the de Rham locus is closed, analytically closed, under some conditions. That is very different from this geometric situation, where you just require the representation to be de Rham at one point and you get the de Rham property everywhere. I think one should really compare this with Deligne's result called Principle B, which suggested the title of the talk, and which says the following. Suppose f from X to S is a smooth, proper map of complex varieties, with S a smooth connected complex variety, and suppose you have a family of Hodge classes. What is this? I checked the book, which says: i.e. a global section of a certain sheaf on S which should be horizontal for the Gauss-Manin connection. Then, if t_s... It is a family of Hodge classes parameterized by S, but what does it mean that it is a global section of something? Well, it is written this way in the book, in the introduction; it is a little bit complicated: one needs to specify a de Rham component and an etale component, so a global section of the relative de Rham cohomology at least; there is a de Rham component, but you may also want to add an etale component. You are talking about absolute Hodge cycles? Yes, absolute Hodge cycles. If t_s is absolute Hodge at one point, it is absolute Hodge at every point: this is what he called Principle B, which, I guess, gives the title of the talk. Before the proof, let me explain the application of the theorem, which was in fact also the motivation of this work. Consider G a reductive group and X a family of Hodge structures with G-structure, which is basically several copies of a Hermitian symmetric domain. Then, choosing K an open compact subgroup of G of the finite adeles, you get the Shimura variety, which at this level is a quasi-projective variety whose set of complex points is given by this double quotient. The theory of canonical models says that this quasi-projective variety is in fact defined over some number field E, called the reflex field; let me not go into detail, just note that there is such a thing. Now let me make the following assumption: assume that the Q-points of the center are discrete in its finite-adele points; this is a technical assumption which you can ignore. Then, if V is a rational representation of G, I get a local system on this complex variety by the standard quotient construction, because G(Q) acts on this vector space. And in fact the theory tells you that what you get is an etale local system over E; a priori it is defined over C, but it descends. You are working with which kind of sheaves, Q-sheaves? Q, or rather Q_p: we fix a prime p, and it is a Q_p etale local system
And V is a rational representation of G defined over Q, just over Q, Q-rational; then you specify a prime p to get the étale local system, and you get something over E.) That is the setup; the general theory of canonical models gives you such a thing. The corollary is then: for every finite extension F of E and every F-point of the Shimura variety, the corresponding stalk, which is a representation of the Galois group of F, is de Rham at p. The proof of the corollary is simple. It follows an observation of Kai-Wen Lan: on the Shimura variety there are some very special points, the special points, and one can check that the corresponding étale local system is de Rham there; so the theorem gives you everything. A remark: this corollary is not new if the Shimura variety is of a special kind; it is known, I guess, if (G, X) is of abelian type, plus some conditions, because in that situation the Shimura variety parameterizes certain abelian motives, motives appearing in abelian varieties, and these local systems are the local systems of their p-adic realizations. (You mentioned Kai-Wen Lan, what is that about? That is about how to prove the corollary; it follows a suggestion of Kai-Wen Lan.) So this was known for Shimura varieties of abelian type, but it is not known in general, because for general Shimura varieties it is expected that they should parameterize motives, but in general we do not know where to find the motives. Nevertheless we know that their p-adic realizations should be de Rham, so maybe one day we can prove part of what is expected before finding the motives; here we get something we need. So it was not known in general. (I don't quite understand: in the special case of abelian-type Shimura varieties, how does V enter? As I said, one needs some condition; it is not every V, only the V really attached to the abelian varieties. But now your corollary is for every V? Yes, every V, though I want this assumption that the Q-points of the center are discrete; one can relax this condition, but then it is not for every V: first you need to descend, to construct the local system on the Shimura variety, and that is not possible for every representation.) Anyway, the whole idea is to try to get what you would expect, for example to construct the shtukas on these kinds of Shimura varieties, without finding the motives. That is the introductory part of the talk. Now I want to give some idea of the proof, which as you will see is fairly easy. (Sorry, a question: is the theorem false for Q_p local systems that do not admit a Z_p lattice? I think it is okay, because this is really a local theorem, and locally you do have a Z_p lattice.) There are two ingredients needed in this proof; let me state the first one.
The first ingredient is the pro-étale topology on rigid analytic varieties and the period sheaves introduced by Peter Scholze. To prove the theorem I may now assume that X over K is a smooth curve, just to save notation. Let me introduce a few more notations. I need K_n, which is K with a primitive p^n-th root of unity adjoined; we choose the roots of unity compatibly, which gives an element of the tilt of K-hat-infinity, where K-infinity is the union of all the K_n and K-hat-infinity its p-adic completion. I also need nu, the map from the pro-étale site to the étale site, which appeared yesterday. A typical object, let me just draw the picture, is the following: I consider U, an étale open subset, and take a toric coordinate, that is, an étale map to the torus T = Spa K<T, T^{-1}>. Then I have the tower of finite étale covers obtained by extracting p-power roots of T; its inverse limit I denote by T-hat-infinity, and I take the fiber product U-hat-infinity = U x_T T-hat-infinity. This tower of finite étale maps gives an object of the pro-étale site, which I write as Spa(A-hat-infinity, A-hat-infinity-plus): the plus ring is the completed direct limit, and A-hat-infinity is obtained by inverting p, as is standard. We want to consider the Galois group Gamma of this tower; the quotient is the arithmetic Galois group, and I also have the geometric Galois group, which is isomorphic to Z_p(1), with a generator gamma acting by gamma(T^{1/p^n}) = zeta_{p^n} T^{1/p^n}; here I also have the cyclotomic character. That is the basic setup. What we need are certain period sheaves on this space. First, the integral sheaf O_X-plus-hat on the pro-étale site, the p-adic completion (an inverse limit) of the pullback of the integral structure sheaf from the étale site, and of course its rational version, obtained by inverting p. But we also need the important period sheaf OB_dR. This appeared before, but let me follow Scholze's notation and recall the definition. I have B_inf, which is the sheaf A_inf appearing in yesterday's talk with p inverted; then OB_inf, built from O_X and B_inf, which admits a map theta to O_X-hat. I could have started in the algebraic setting, but because of this completion I really have to work in the analytic situation. Then OB_dR-plus is the completion with respect to the kernel of theta, and OB_dR is OB_dR-plus with t inverted, as usual. This is something quite complicated, so let me just tell you the properties I really need. First, by definition OB_dR is an O_X-module, because OB_inf is an O_X-module. Second, there is a filtration, given by the powers of the kernel of theta. In general it is very hard to describe this sheaf, except that if we pass to a suitable object of the pro-étale site, such as U-hat-infinity, we do have a description.
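For readability, here is the local setup in symbols, as I reconstruct it from the spoken description; the indexing follows Scholze's papers, so treat it as a sketch rather than the exact blackboard formulas.
\[
K_n=K(\zeta_{p^n}),\qquad K_\infty=\bigcup_n K_n,\qquad \widehat K_\infty=\text{$p$-adic completion of }K_\infty,
\]
\[
U\longrightarrow \mathbb{T}=\operatorname{Spa}K\langle T^{\pm1}\rangle,\qquad
\widehat U_\infty=U\times_{\mathbb{T}}\widehat{\mathbb{T}}_\infty,\qquad
\widehat{\mathbb{T}}_\infty=\varprojlim_n \operatorname{Spa}K_n\langle T^{\pm1/p^n}\rangle,
\]
\[
\Gamma=\operatorname{Gal}\bigl(\widehat U_\infty/U\bigr),\qquad
1\to\Gamma_{\mathrm{geom}}\simeq\mathbb{Z}_p(1)\to\Gamma\to\operatorname{Gal}(K_\infty/K)\to1,
\qquad \gamma\cdot T^{1/p^n}=\zeta_{p^n}T^{1/p^n}.
\]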
Namely, gr^i of OB_dR restricted to U-hat-infinity is O_X-hat(i) with a variable V adjoined, where V = t^{-1} log([T-flat]/T). The Tate twist by i really comes from the action of the arithmetic Galois group, a twist by the cyclotomic character, and if you unwind the definition, the element gamma of Gamma acts on V by sending V to V + 1. Another important fact I need: there exists a connection on OB_dR. What does that mean? It means there is a map from OB_dR to OB_dR tensor Omega_X, where Omega_X is just the pullback to the pro-étale site of the usual sheaf of differentials from the étale site. Now the theorem really follows from two propositions. Proposition one: consider E^i, the i-th derived pushforward along nu of L-hat tensor OB_dR (by the way, given an étale local system L on X, I can pull it back to the pro-étale site to get a local system L-hat there); then this pushforward is a vector bundle. Proposition two: its fiber at a classical point x, with residue field k(x), is exactly H^i of the Galois cohomology of L_{x-bar} tensor B_dR. These two propositions imply the theorem: you get a vector bundle, and if at one point the fiber has the correct rank, then the rank is correct everywhere. In fact, for the theorem I only need E^0 in proposition two, but to prove it you need all the E^i in proposition one; considering E^0 alone is not enough. To prove that E^i is a vector bundle, the crucial observation is that it carries a connection coming from the connection on OB_dR, an integrable connection; so to prove it is a vector bundle you only need to prove it is a coherent sheaf. That is one of the main observations. So the problem is a finiteness problem: you must show that a sheaf built from a huge ring and a huge pro-étale cover is in fact finite. And if you look at what you end up with: from a de Rham local system you get a vector bundle with connection, so it is a kind of Riemann-Hilbert correspondence; the only issue is that in general it may not have the correct rank, but if it has the correct rank at one point, it has it everywhere. So it is basically a Riemann-Hilbert statement. (If it is de Rham at one point, are the higher H^i zero? No, H^i is not zero: there is an H^1; I guess there is no H^2.) So I really need to prove coherence. (What about proposition two, is that difficult? That is not difficult; if I have time I may comment on it. The crucial thing is the proof of proposition one: OB_dR is much larger than B_dR, so you do need to do something.) Proving coherence is a local statement, so I can assume X is as in the toric picture above; this is my X. Then the starting point for computing the derived pushforward is the perfectoid business that appears here.
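The two propositions, written out; this is a reconstruction, with the twist and filtration bookkeeping suppressed, and k(x) denotes the residue field of a classical point x.
\[
\mathcal{E}^i\ :=\ R^i\nu_*\bigl(\widehat{L}\otimes\mathcal{O}\mathbb{B}_{\mathrm{dR}}\bigr).
\]
Proposition 1: each \(\mathcal{E}^i\) is a vector bundle on \(X_{\mathrm{\acute et}}\), with an integrable connection induced by the connection on \(\mathcal{O}\mathbb{B}_{\mathrm{dR}}\).
Proposition 2: for a classical point \(x\),
\[
\mathcal{E}^i\otimes k(x)\ \simeq\ H^i\bigl(\mathrm{Gal}(\overline{k(x)}/k(x)),\ L_{\bar x}\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}\bigr).
\]
Since \(\dim\,(L_{\bar x}\otimes B_{\mathrm{dR}})^{G_{k(x)}}\le\operatorname{rank}L\), with equality exactly in the de Rham case, local constancy of the rank of \(\mathcal{E}^0\) on the connected space X gives the theorem.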
The i-th derived pushforward, evaluated on U as above, can be computed by Galois cohomology; otherwise you would not know how to compute anything. This follows because there is no higher cohomology when you restrict the sheaf to U-hat-infinity. (You could also consider R nu_* in the derived category, evaluate on U and take the i-th cohomology; that seems more directly related to the right-hand side. The right-hand side is a presheaf whose sheafification is R^i nu_*, and you have to prove the sheafification is the right thing. I think at the end I prove directly that the right-hand side is already a sheaf.) So what I need to show is the following: H^i of Gamma acting on (L-hat tensor OB_dR)(U-hat-infinity) is a finite A-module, compatible with base change; here A is the ring of functions on U, so this is the statement on U. (Compatible with which base change, base change of A? Yes: for a flat base change A to B, this is an A-module, and tensoring over A with B you should get the corresponding module for B. Usually to prove base change one also needs flatness. I will prove flatness, but later, using the connection; flatness follows from the connection, and here I only do flat base changes anyway.) Now I use the fact that there really is a filtration on this sheaf: what I need to show is that after taking the associated graded I get what I want, namely that H^i of Gamma acting on gr^j of (L-hat tensor OB_dR), evaluated on U-hat-infinity, is a finite A-module, compatible with base change, and vanishes for all but finitely many j. (Isn't this just the Hodge-Tate situation, that is, isn't gr^j of OB_dR essentially the Hodge-Tate period sheaf? Yes, I think that should be right. You said this would prove the theorem, but this graded object may not itself be a vector bundle. OK, but this graded statement is what I want for the finiteness.) Let me write M-hat-infinity for the value of L-hat on U-hat-infinity. According to the description of the graded pieces, what I really want to compute is H^i of Gamma acting on M-hat-infinity with the variable V adjoined, V a single variable. Here comes the second key ingredient. Let me first formulate it the way it appears in the work of Kedlaya and Liu; later I will explain it in another way. Theorem: for sufficiently large n there exists a finite projective A_n-submodule of M-hat-infinity, where A_n is the ring of functions on U_n sitting inside A-hat-infinity, stable under the action of Gamma, inducing a Gamma-equivariant isomorphism after completed base change, and such that H^i of Gamma acting on the quotient is zero for every i. That is the statement we use, but let me explain it in another way, which is more... (I'm told time is short. A question: how do you get this V?)
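The decompletion statement being invoked, in symbols; this is my paraphrase of the Kedlaya-Liu-style result quoted in the talk, and the precise hypotheses on n and on the local system are in their papers.
\[
\text{For }n\gg0\ \exists\ M_n\subset\widehat M_\infty\ \text{finite projective over }A_n,\ \Gamma\text{-stable, with}
\]
\[
M_n\otimes_{A_n}\widehat A_\infty\ \xrightarrow{\ \sim\ }\ \widehat M_\infty\quad(\Gamma\text{-equivariantly}),
\qquad
H^i\bigl(\Gamma,\ \widehat M_\infty/M_n\bigr)=0\ \ \text{for all }i.
\]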
(When you take the filtration on OB_dR-plus by powers of the kernel of theta, you should get things which are rank-one modules over O_X-hat; is that the filtration you look at? Yes, the filtration by powers of the kernel of theta on OB_dR-plus; but after inverting t you get something larger, and that is where this V appears. It is in Peter Scholze's paper. How much time do I have? Twelve minutes, OK.) So let me get to the point: this part is in some sense a strengthening of the p-adic Simpson correspondence, first considered by Faltings, and also by Abbes and Gros. Since the question is local, I can make assumptions: I start with a Z_p local system L which is small, meaning that L modulo a suitable power of p is trivial, with the exponent bigger than the usual explicit bound. In this situation I only need a very simple part of the Simpson correspondence: there exists a unique finite projective A_K-module M_K with a linear action of the geometric fundamental group, such that M-hat-infinity is isomorphic to M_K tensored over A_K with A-hat-infinity, equivariantly for the geometric group; and in addition the cohomology H^i of the geometric group acting on M_K is the same as H^i of the geometric group acting on M-hat-infinity. That is the very basic version I need. Basically, if you take the logarithm of the action of gamma, that gives you the Higgs field; let me ignore the Tate twist. That is the p-adic Simpson correspondence, and the statement says that Higgs cohomology also computes the geometric Galois cohomology here. Therefore one reduces: one can show that H^i of the geometric group acting on M-hat-infinity with the variable V adjoined (I can ignore the cyclotomic character for a moment) receives a map from the corresponding cohomology of M_K with V adjoined, and this map is an isomorphism; one can show this easily. So I really want to compute the cohomology of this smaller, much smaller object. Now, the geometric fundamental group is really simple: it is the topological group generated by the single element gamma. So one does the following lemma of linear algebra, which is a good exercise. Let M be a Q_p-vector space with an automorphism gamma. Write M-gen for the generalized invariants, namely those m such that (gamma minus 1)^n m = 0 for some n, the generalized eigenspace for eigenvalue 1. Consider the action of gamma on M with the variable V adjoined, defined as before: m times V^j goes to gamma(m) times (V+1)^j, that is, gamma sends V to V + 1; so you tensor the two actions, one on M and one on the polynomial ring in V where gamma acts by the shift.
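The small-case Simpson-type statement, in symbols; this is a paraphrase, with M_K and A_K kept as the speaker's notation for the descended module and its base ring, and without pinning down the exact base.
\[
\exists!\ M_K\ \text{finite projective over }A_K,\ \Gamma_{\mathrm{geom}}\text{-equivariant},\qquad
M_K\otimes_{A_K}\widehat A_\infty\ \simeq\ \widehat M_\infty ,
\]
\[
H^i\bigl(\Gamma_{\mathrm{geom}},M_K\bigr)\ \simeq\ H^i\bigl(\Gamma_{\mathrm{geom}},\widehat M_\infty\bigr),\qquad
\theta:=\log\gamma\ \ \text{(up to a Tate twist) is the associated Higgs field.}
\]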
So the Lie algebra acts by the derivation in V. Then the lemma says: the gamma-invariants of M[V] are exactly the generalized invariants of M; and if M equals its generalized invariants, so if everything lies in that generalized eigenspace, then the gamma-coinvariants, that is H^1, vanish. A little exercise in linear algebra. Now the simple but crucial observation is: the generalized invariants of M_K are all of M_K. Therefore you can apply the lemma to this situation, and it tells you that the cohomology is just M_K in degree zero and zero in higher degrees. Then you reduce to computing certain Galois cohomology for the arithmetic Galois group, where the finiteness is comparatively easy. This observation, which I think is interesting, is really equivalent to saying that the Higgs field is nilpotent; you really get that the Higgs field is nilpotent. And that is not so surprising: in the p-adic Simpson correspondence a local system gives you a Higgs field, but here my local system L is not just a local system over the big field; it is defined over X over little k, so there is an action of the arithmetic Galois group on it, and that forces the Higgs field to be nilpotent. You check it as follows: write the generator of the arithmetic Galois group as delta; because of delta, the pair (M_K, Higgs field) is isomorphic to its twist by the cyclotomic character, up to a sign I do not remember. That means every coefficient of the characteristic polynomial of the Higgs field must vanish, so it is nilpotent. This is exactly the classical argument of Simpson when he proves the statement about local systems supporting a complex variation of Hodge structure. (Is it true that your local system is a Hodge-Tate local system? You are proving that theta is nilpotent, and in Simpson's setting that corresponds to a variation of Hodge structure; here does it correspond to Hodge-Tate, or de Rham? At the end it is de Rham; it could be de Rham or Hodge-Tate, maybe, I don't know. But I think it is an amusing fact that if the local system really comes from arithmetic, its Higgs field is nilpotent. I was just observing that this is exactly the analogue of Grothendieck's argument for the quasi-unipotence of local monodromy. Yes, I see: you constrain the characteristic polynomial using the extra action; exactly the same.) From here the rest of the argument is routine, so I think I will stop. (One question: you assumed that the local system is small. It is not really an assumption: in the theorem of Kedlaya and Liu one passes to some level A_n, which reduces everything to the small case.) Thank you.
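The linear-algebra lemma and the nilpotence argument, written out; this is my formalization of the blackboard statement, with signs and Tate twists suppressed.
\[
\gamma\bigl(m\otimes f(V)\bigr)=\gamma(m)\otimes f(V+1),\qquad
M^{\gamma\text{-gen}}:=\{m\in M:(\gamma-1)^N m=0\ \text{for some }N\},
\]
\[
\bigl(M[V]\bigr)^{\gamma}=M^{\gamma\text{-gen}},\qquad
\text{and if }M=M^{\gamma\text{-gen}}\ \text{then}\ H^1(\gamma,M[V])=0 .
\]
Nilpotence: if \(\delta\) generates the arithmetic Galois group and \(\theta\) is the Higgs field, a relation of the schematic form \(\delta\theta\delta^{-1}=\chi(\delta)\,\theta\) forces every coefficient of \(\det(T-\theta)\) to vanish, so \(\theta\) is nilpotent; this is the same trick as in Grothendieck's quasi-unipotence argument.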
Let X be a smooth connected algebraic variety over a p-adic field k and let L be a Q_p étale local system on X. I will show that if the stalk of L at one point of X, regarded as a p-adic Galois representation, is de Rham, then the stalk of L at every point of X is de Rham. This is a joint work with Ruochuan Liu.
10.5446/20245 (DOI)
Thank you very much for inviting me to speak here; it is a great pleasure. We collaborated with Arthur on one long project, for about three years I think, and that was the most fruitful time for me in mathematics. In addition, I learned from Arthur what a log scheme is and what the Betti cohomology of a log scheme is; in fact the work I am going to talk about also grew out of a question we discussed a while ago with Arthur. The question is the following. For log schemes I have various cohomology theories, for example étale cohomology, or de Rham. If my log scheme happens to be defined over a finite field, then the étale cohomology of X with Q_l coefficients is a vector space over an l-adic field with an action of Frobenius, and one can check that for a reasonable scheme X the eigenvalues are all Weil numbers, of various weights of course; in particular this gives a weight filtration. The question is: can one define this filtration geometrically? I want a definition that makes sense for all three theories; more precisely, one can ask for it on rational Betti cohomology, or even whether it is well defined integrally, as for the usual singular homology of algebraic varieties. Or one can ask whether one can associate to X a motive, a Voevodsky motive, or, a finer question, a motivic homotopy type. I want to tell you a little about this question, and about what kind of homotopy type one expects to have. The answer lies in the construction due to Kato and Nakayama; let me recall it. They defined the Betti cohomology of a log scheme X, with Z coefficients or any coefficients, to be by definition the cohomology of a certain topological space X_log. That is what Arthur explained to me. (My X is now over the complex numbers, of course.) So X is a log scheme over C, and X_log is in general a topological space, in many cases a manifold with boundary with some additional structure, equipped with a proper map to X. Here is the construction. I have my log scheme; I denote by X-underline the underlying scheme, and M is the sheaf of monoids. So I have the sheaf of monoids, O^* sits inside it, and there is a map to O. The only thing I care about, and this is true for the entire talk, is the group completion M^gp. This is a sheaf of abelian groups, O^* sits inside it, and the quotient, call it Lambda, is a constructible sheaf for the étale topology (I will look at fine log schemes), or, if you wish, you can consider a version of it for the Zariski topology; in fact in my talk it will be more convenient to consider all these sheaves in the Zariski topology, because of the objects we deal with at the end. So let us stick to this.
A good example of such a Lambda is the constant sheaf supported on the divisor. So you have such an exact sequence, and to a scheme together with such an extension I want to associate a topological space. How? If I have a point x, there is an evaluation map at x, and from this extension I obtain an extension of the stalk Lambda_x, a finitely generated abelian group, by C^*. (It is not literally the stalk; it is the push-out.) I also have the map from C^* to S^1 given by the argument; let us call this map gamma. I look at all sections sigma_x of this extension, that is, splittings of the push-out along the argument map, satisfying the property that composing with gamma gives back the argument. Then X_log, as a set, is the set of all pairs (x, sigma). (Your Lambda is torsion free because you assume fs? It need not be torsion free; in a second I will assume fs, but for the moment I do not even need that.) Of course there is a map to X, and the fiber over a point x is a torsor under Hom(Lambda_x, S^1); in the case of a nice, fs log scheme, this is just a real torus. Now the topology: any section m of M^gp defines an S^1-valued function on X_log, sending (x, sigma) to sigma_x(m). I take the weakest topology such that the pullback of every continuous function on X is continuous and all these functions are continuous. So that is the topological space, and the ultimate goal would be to lift its homotopy type into the motivic homotopy category. The possibility of such a construction was suggested to me a long time ago by Maxim, and also, independently and at about the same time, by Nori. They explained to me one single example which is really the key example, and I want to tell you about it. This is the log structure that arises on the special fiber of a semi-stable degeneration. So this will be X_0, this will be X_t, a semi-stable degeneration; it is strict, and moreover I assume there are no triple intersections, so everything is very simple: in my picture there will be only two divisors (the case of many components can be generalized). I have these divisors, D_1 and D_2, with intersection D_12, and I have two line bundles on X_0, namely L_i = O(D_i) restricted to X_0, for i = 1, 2. Each L_i comes with a trivialization: writing U_i for the complement X_0 minus D_i, the restriction of L_i to U_i is canonically trivial. These two line bundles together with the trivializations can be organized as follows: giving such data is the same thing as giving an extension of Z_{D_1} plus Z_{D_2} by O^*, everything over X_0.
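The Kato-Nakayama space in formulas, as I understand the construction just described; this is the standard definition, and the push-out presentation is a paraphrase of the blackboard version.
\[
X^{\log}=\bigl\{(x,\sigma)\ :\ x\in X(\mathbb{C}),\ \sigma\colon M^{\mathrm{gp}}_x\to S^1\ \text{a homomorphism with}\ \sigma|_{\mathcal O^*_{X,x}}=\arg\circ\,\mathrm{ev}_x\bigr\},
\]
\[
\tau\colon X^{\log}\to X\ \text{proper},\qquad
\tau^{-1}(x)\ \text{a torsor under}\ \operatorname{Hom}(\Lambda_x,S^1),\qquad \Lambda=M^{\mathrm{gp}}/\mathcal O^*_X .
\]
For an fs log structure, \(\operatorname{Hom}(\Lambda_x,S^1)\) is a compact real torus of rank equal to the rank of \(\Lambda_x\).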
So I consider X_0 as a log scheme: the log structure is the pullback of the log structure on the total space given by the divisor. The sheaf Lambda then has a very simple description: it is the direct sum of the constant sheaves Z on the D_i, extended by zero to X_0. In this case I want to look at X_log and realize it as an object of the A^1-homotopy category. What you do is the following. One more piece of notation: L_i-circ will be the G_m-torsor associated to L_i; take the total space of the line bundle and remove the zero section. Then I want to consider the following complex; I want to construct a Voevodsky motive, in fact even an object of the stable homotopy category, and even unstable after a certain modification. Let me first define it as a motive. I do something very simple. I take L_1-circ, regarded as a scheme living over X_0, and I want to restrict it to U_1; sorry, to U_2 of course, because it is trivial on U_1. Second, I take L_2-circ and restrict it to U_1. (When I write square brackets, it means I consider the scheme as a motive, just the motive of a plain scheme.) And the remaining term is L_1-circ times L_2-circ over X_0, restricted to D_12, the intersection. My motive is going to be the cone of this complex, and I need to tell you what the arrow is. Here is a reminder of why this construction is possible: in the category of Voevodsky motives you have an identification between the punctured tubular neighborhood of a smooth subvariety and its punctured normal bundle. Namely, if Y is smooth and Z inside Y is closed and smooth, there is a canonical map from the motive of the punctured normal bundle to the motive of Y minus Z. This exists in differential geometry, and it can be lifted to the motivic category. The map I have here is precisely that map. Here is the picture: here is my D_1, and over it the G_m-torsor L_1-circ; here is D_12. Inside the total space of L_1-circ I have its fiber over D_12. I apply the construction with Y this G_m-torsor over D_1 and Z its fiber over D_12; that gives precisely the first map, and the second map is constructed similarly. (Do you put a sign, or does that mean a power? I do not think there is any sign here; it should be symmetric at this point. But if you have several components intersecting each other and, say, going around in a loop? It seems to me that at least here you do not need a sign, but maybe up to sign; it is possible.) In fact, the map I used here exists even in the A^1-homotopy category, but only after suspension. So you can do the same gluing there, except that what you get is not the homotopy type of X_log itself but the homotopy type of its suspension.
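The two-component gluing complex, in the notation above; this is my transcription of the blackboard diagram, with the direction of the arrows as stated in the talk and the signs left unchecked. The motive of the log scheme (X_0, M) is defined as the cone of
\[
\bigl[(L_1^{\circ}\times_{X_0}L_2^{\circ})|_{D_{12}}\bigr]\ \longrightarrow\ \bigl[L_1^{\circ}|_{U_2}\bigr]\oplus\bigl[L_2^{\circ}|_{U_1}\bigr],
\]
where each component of the arrow is the "punctured normal bundle to punctured tubular neighborhood" map applied to \(Z=L_i^{\circ}|_{D_{12}}\) inside \(Y=L_i^{\circ}|_{D_i}\).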
So at least you get something in the stable homotopy category; the fact that the map exists after suspension is proven by Morel and Voevodsky. One more remark about this picture: the L_i have additional structure when they come from a semi-stable degeneration, namely L_1 tensor L_2 is trivial. Strictly speaking the trivialization depends on the choice of a parameter t on the base, because O(D_1 + D_2) is the special fiber, pulled back from the base. Therefore every scheme in this picture admits a map to C^*, the invertible complex numbers. Let us give the resulting object a name: the motive of the log scheme (X_0, M), by definition. In this case the motive is in fact a motivic sheaf over C^*, and you can take its fibers; every scheme here is a scheme over C^*, so you can take fibers, and that is how you get what is called the limit motive. I cannot really draw a picture of the gluing of X_log, but X_log maps to the log space associated to the log point, that is, to the circle, and I can draw a picture of its fiber over a point of the circle. Imagine, for example, a degeneration of an elliptic curve: it degenerates to a rational curve with two double points; this is the non-singular elliptic curve and now it degenerates. What you do is remove these singular points and then glue along punctured tubular neighborhoods, and you get back something homotopy equivalent to the original smooth fiber. Now, I want to extend this construction to all log schemes, and even in the semi-stable case with many components and multiple intersections the direct generalization is really unpleasant: you can write down a similar complex, but you have to lift it somehow to the chain level, and then the square of the differential will not be zero; it will only be homotopic to zero, with the homotopies given by some double normal cone constructions, so it becomes completely useless, though possible; you cannot really do anything with it. Instead, one should look at the dual thing, the motivic homology of this motive, which has a very simple geometric description. I will show it to you at the end in the case of the semi-stable degeneration, but first I want to formulate the main theorem in abstract form. For that I need to introduce a certain category, which I will call the category of log motives. Let me first write down the definition of the usual category of Voevodsky motives, and then list a few more relations to get the category of log motives. I will assume for simplicity that the characteristic of my base field is zero; if it is p, you can do the same thing, but you have to invert p in the coefficients.
For Voevodsky motives you do the following. Consider the category of schemes of finite type over a field and form an additive category: the objects are schemes of finite type, and the morphisms are linear combinations of maps of schemes, no correspondences here. Then take finite complexes over this additive category and pass to the quotient by a certain subcategory T; you can do this on the triangulated level, passing to the homotopy category and taking the Verdier quotient, or you can regard everything as a differential graded category. What are the objects of T? Let me list them. The first class of objects is [X x A^1 -> X]; this is called A^1-homotopy invariance. The second class I will refer to as the cdh topology; these are generating covers, and there are two parts. Part (a): suppose I have a Zariski open subset U of X and an étale map p: W -> X such that p restricted to the preimage of X minus U is an isomorphism of schemes onto X minus U; so the map is étale, and over the complement of U it is a bijection. In particular U and W cover X, but the condition is much, much stronger than that. In this situation I want a relation of the form: the complex [U x_X W] -> [U] plus [W] -> [X] lies in T. Part (b): I consider proper maps p: X' -> X such that there exists a closed subscheme Z of X over whose complement p is an isomorphism, that is, p induces an isomorphism between the preimage of X minus Z and X minus Z; for example, a blow-up. In this situation I want the relation [p^{-1}(Z)] -> [X'] plus [Z] -> [X]. (And you must put a sign somewhere. Yes, I have to take the difference somewhere; it does not really matter where, but otherwise the square of the differential is not zero.) (And maybe (c): the map from X_red to X must be an isomorphism? That follows from (b), taking Z to be the non-reduced locus; OK, you are right, many things here follow from the others.) This is not what is usually called the category of Voevodsky motives, but this quotient was also defined and studied by Voevodsky; it has a complicated notation, something like H_{A^1, cdh}(Z[Sch]), and then one takes the Karoubi completion. What is the relation of this category with the category of Voevodsky motives? There is at least a functor from it to the Voevodsky category, which is obtained in exactly the same way except that you add transfers: you consider schemes with finite correspondences as morphisms, and only smooth schemes.
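The two families of cdh relations, displayed; square brackets denote the class of a scheme in the additive category, and whether the middle term carries a sign is left open, as in the talk.
\[
\text{(a)}\quad [\,U\times_X W\,]\ \longrightarrow\ [U]\oplus[W]\ \longrightarrow\ [X]\ \in T,
\qquad p\colon W\to X\ \text{étale},\ p^{-1}(X\setminus U)\xrightarrow{\ \sim\ }X\setminus U ;
\]
\[
\text{(b)}\quad [\,p^{-1}(Z)\,]\ \longrightarrow\ [X']\oplus[Z]\ \longrightarrow\ [X]\ \in T,
\qquad p\colon X'\to X\ \text{proper},\ p^{-1}(X\setminus Z)\xrightarrow{\ \sim\ }X\setminus Z .
\]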
(I don't think that using the cdh topology makes any difference here. Well, if you consider only smooth schemes and the cdh topology, then (b) can be derived from (a) and the other relations.) Now I want log motives. What do I do? Consider the same kind of category, but start with log schemes, by which I mean fine and saturated (fs) log schemes, and take the quotient of Z[LogSch] by a subcategory T_log. What is T_log? It is a full subcategory of objects: I keep the objects of types 1 and 2, and I impose a few more relations. Before stating them, a remark: in the homotopy relation, X is allowed to be any log scheme, and the same in the cdh squares; everything in those pictures may be a log scheme. (What about A^1? A^1 there is just the usual A^1 with trivial log structure, but I will add an axiom for a log A^1. And for closed immersions you presumably want the exact ones. Yes.) So here is the axiom I want, call it axiom 3. Consider A^1_log: the underlying scheme is A^1 and the log structure is given by one point, the origin, so that M^gp corresponds to the line bundle of this point; I also have the log point at the origin, and G_m with its trivial log structure. I want these maps, the inclusions of G_m and of the log point into A^1_log, to become isomorphisms in my quotient category; and not only these, but also after multiplying by any log scheme X, so X x A^1_log, X x G_m, and so on. (So these are objects of T: you require the corresponding cones to be zero, that's all. Yes.) There is one additional, rather strange axiom. We already observed that all the constructions made so far depend only on M^gp, not on M; M is irrelevant, and I could just as well consider the category of log schemes as the category of such extensions. So, finally, axiom 4, which is very strange: suppose p: X' -> X is any map of log schemes such that the underlying map of schemes is an isomorphism and the induced map on M^gp is an isomorphism; then I want the relation [X'] -> [X] to be in my category, that is, p becomes an isomorphism. This is a way of saying that everything depends only on M^gp; again, you could instead consider just the category of pairs of a scheme and an extension of Lambda by O^*. (When you do log blow-ups, I seem to remember that the space X_log does not change? No, it changes; OK, X_log changes, but it has the same homotopy type.) So here is the theorem. I have the category H_{A^1, cdh}(Z[Sch]), my earlier category, and the larger category H_{A^1_log, cdh}(Z[LogSch]); there is of course a functor from the first to the second, and the theorem is that this functor is an equivalence of categories.
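The main theorem, in the notation just introduced; the subscripts are my shorthand for the localizations described verbally, with "log" on the right standing for the extra relations 3 and 4.
\[
H_{\mathbb{A}^1,\,cdh}\bigl(\mathbb{Z}[\mathrm{Sch}/k]\bigr)\ \xrightarrow{\ \sim\ }\ H_{\mathbb{A}^1_{\log},\,cdh}\bigl(\mathbb{Z}[\mathrm{LogSch}^{fs}/k]\bigr),\qquad \operatorname{char}k=0,
\]
with the same statement in characteristic p after inverting p in the coefficients. In particular any cohomology theory of schemes satisfying the relations above extends uniquely to fs log schemes, and composing with the inverse equivalence attaches a Voevodsky-style motive to every log scheme.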
What does the theorem mean? Functors from this category to, say, the category of complexes are homology theories satisfying these basic properties; the claim is that any such theory extends uniquely to log schemes, provided it has all these properties. In particular, there is a functor to the usual Voevodsky category, so composing with the inverse equivalence you get a Voevodsky motive for every log scheme. Now, the proof uses very little geometry; it is more or less linear algebra. You have to prove two things about the obvious functor (let us give it a name): (a) it is fully faithful, homotopically fully faithful if you think of it as a dg functor, and (b) it is essentially surjective. In fact the easy step is that (a) implies (b): you show, by induction on dimension, that the motive of any log scheme is in the image of the functor, that is, that there are enough relations to express the motive of any log scheme in terms of motives of usual schemes. So all you really have to do is prove (a). The induction is easy. If the log scheme is zero-dimensional, then, since it is fine and saturated, the sheaf Lambda is just Z^n; for example, the motive of the log point is the motive of G_m, and the Cartesian product of the log point with itself has the motive of G_m x G_m, which follows from the axiom; using this, any zero-dimensional log scheme has the same motive as such a product. What do you do in the one-dimensional case? You have some curve; generically, over an open set, the log structure is trivial in the sense that M^gp is just the trivial extension of Z^n by O^*. So you know what to do over an open set, and then you use the cdh axiom to see that the log motive of the curve is also in the image. (Wait, have you still got the formula there? I used the cdh relation: look at the cdh axiom, [p^{-1}(Z)] -> [Z] plus [X'] -> [X]. I want to show that [X] is in the image, knowing that the other objects are. For that I need a map from [X] to the rest of the complex, and I need to show this map exists already in the usual Voevodsky category. It exists essentially by definition: because this is an exact triangle, there is a map from [X] to that object shifted by one, and I need to show that this map actually lifts.) (Do you assume the log structure is Zariski-locally given? I do use that it is Zariski. I could work étale-locally instead, but then I would need to work with the étale topology rather than Nisnevich or Zariski, and at the end I would get an étale Voevodsky motive.)
Some applications I had in mind have to do with the integral weight filtration, which exists only for realizations of usual Voevodsky motives, not étale ones. So, what do you need for (a)? By formal nonsense (let me suppress the completely formal part, which only uses the fact that the category of Voevodsky motives is rigid, that is, has duality), it suffices to show that the functor Hom(-, Z(n)) on the usual cdh category of Z[Sch] extends to the larger category H_{A^1, cdh}(Z[LogSch]). In other words, you want to define a kind of motivic homology; it is not quite motivic homology, since it is Hom in the category with no transfers (it would be motivic homology if I considered the category with transfers), and you want it for any log scheme. The idea is very simple. The sheaf M^gp can be considered as a motivic sheaf over the underlying scheme; that simply means you can evaluate it on any scheme which is smooth over this guy: given such a map, you pull back the log structure and take the sections of M^gp there. Then you compute the homology of X with coefficients in the symmetric powers of this sheaf; symmetric powers are operations on the category of motivic sheaves, defined by Voevodsky and characterized by the property that the symmetric power of the motive of a scheme X is the motive of the symmetric power of X. (Is it M-bar-gp or M^gp? Just M^gp. So when you pull back you do not make it into a log structure on Y, you just pull back the sheaf, without enlarging the units; usually when you pull back a log structure you do enlarge the units.) For example, what happens for n equal to 1? The motivic homology in weight one is just the homology of M^gp; it is a kind of analogue of the Picard group. (Homology in which sense, homology of what? This is the Nisnevich homology of this motivic sheaf.) Now I want to finish with the very explicit, very geometric formula that I promised at the beginning, for the motivic homology of the limit motive in the case of a semi-stable degeneration, equivalently of the tubular neighborhood. It will be just in the semi-stable case, and it is trivial from the definition; I just want to make it explicit. Here is the setting: again the log structure comes from a semi-stable degeneration, but now I allow multiple intersections; it is still semi-stable. Maybe that is not a good picture, these should be planes rather than lines, so let me not insist on the picture. So: a strictly semi-stable degeneration, strict because I really want the log structure in the Zariski topology.
I have components D_1, D_2, and so on, D_i for i in some index set I, the line bundles L_i = O(D_i) restricted to X_0, the corresponding G_m-torsors L_i-circ, and the open sets U_i = X_0 minus D_i. Now I construct the motivic homology. Step one: a very simple complex of smooth varieties over X_0 whose fibers are torsors over algebraic tori. Which varieties? The first term is the product over X_0 of all the L_i-circ, a torsor over an algebraic torus of dimension the number of components. The next term is the sum (I want to work in an additive category so that sums make sense; you can think of a disjoint union) over indices j of the product, over all i different from j, of the L_i-circ, times U_j, everything over X_0. The complex continues in this pattern, and its last term is the sum over i of L_i-circ times the intersection of the U_j for j different from i, again over X_0. All the differentials come from the sections: remember that L_j is trivial over U_j, so the trivialization gives a section over U_j, and hence the maps. That is step one; call this complex Z-dot. Step two: define a complex of presheaves on X_0 as follows. For a scheme Y over X_0, the value of my complex of presheaves on Y is the group of correspondences over X_0 from Y to this complex; informally, you take sections of this complex of varieties over Y. Then you make it homotopy invariant by applying the Suslin construction. Call the result F. The motivic homology of the log scheme (X_0, M) with Z coefficients is just the cdh homology of X_0 with coefficients in F. That's it; that is all I have to say. As you see, you can ask where the log geometry is here, and in fact there is no log geometry; it is just linear algebra. The log geometry appears if you want to prove, for example, that for a smooth log scheme the motive is isomorphic to the motive of the open part where the log structure is trivial. This I do not know how to do in general; I know how to do it in the normal crossings case, but in general it really requires some geometry. Thank you very much. (A question about the twist in the notation: here I defined the motivic homology only for one index, namely n equal to the cardinality of the index set I. If you want it for larger n, you just add empty divisors to the whole construction; and for smaller n you do not need a separate definition, since it suffices to define it for sufficiently large n. Just by cancellation? Yes: by cancellation you can express motivic homology with coefficients in Z(n) in terms of the homology of the twist, with coefficients in Z(n+1). That is the product with G_m? Yes, the product with G_m, for example. So I was just curious: what is n, where is n? Shouldn't there be an n if this is motivic homology? Yes, this twist here, right?
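The complex of step one, written out with an explicit indexing; this indexing by subsets S of I is my reconstruction of the pattern "full product, drop one factor, ..., single factor" described on the blackboard, so it should be checked against the written account.
\[
Z^{\bullet}:\qquad Z^{k}\ =\ \bigoplus_{\substack{S\subseteq I\\ |S|=k}}\ \Bigl(\prod_{i\in S}L_i^{\circ}\Bigr)\times_{X_0}\ \bigcap_{j\notin S}U_j,\qquad k=|I|,\,|I|-1,\dots,1,
\]
with differentials induced by the canonical sections \(U_j\to L_j^{\circ}|_{U_j}\). Then
\[
F\ :=\ \text{Suslin complex of}\ \ Y\mapsto \mathrm{Cor}_{X_0}\bigl(Y,\,Z^{\bullet}\bigr),\qquad
H_*^{\mathrm{mot}}\bigl((X_0,M),\mathbb{Z}\bigr)\ =\ H_*^{cdh}\bigl(X_0,\,F\bigr).
\]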
No, no: is this n the rank of the log monoid? Yes; and that is a very good question. This whole complex of schemes lives over G_m, exactly as in the picture I started with. It gives you the tubular neighborhood; if you want the vanishing cycles, you have to take the fiber over a point. Oh, OK. So it is a unipotent motivic sheaf over G_m. But how can you define the map to G_m if you are only given the special fiber? The map to G_m comes from the trivialization of the tensor product of these line bundles: each of these smooth schemes over X_0 admits a map to G_m, because the product of the coordinates does. I see; I think this was implicit.) (Earlier you said you were going to explain how to define the weight filtration; does it come out easily from this? Yes, thanks to the work of Bondarko: the Betti or de Rham realization of any Voevodsky motive is equipped with an integral weight filtration. It was shown that this is equivalent, rationally, to the other definitions; he defined the weight filtration, but then you have to show it agrees with the others. Well, a priori here you do not have the others. What you have to show is that the Betti homology of the log motive I defined coincides with the homology as defined by Kato and Nakayama. But this is obvious, because the Kato-Nakayama theory satisfies all of my axioms, and therefore the functor from log motives is uniquely determined by its restriction to usual motives; for usual schemes there is nothing to prove.) (Sorry, a last question: in characteristic p, does everything work? You have to invert p. And always with the Zariski topology? Yes, Zariski again; the only reason I work with Zariski-local log structures is that I want Voevodsky motives for the Nisnevich topology, as opposed to Voevodsky motives for the étale topology. So is it enough to have charts Nisnevich-locally? Yes, I believe so, though I have not checked carefully; étale-local charts would not be good enough for this.) Any more questions? Thank you very much.
Given a log scheme X over the field of complex numbers Kato and Nakayama associated with X a topological space X_{log}. I will show that the homotopy type of X_{log} is motivic in the sense of Morel and Voevodsky. The talk is based on a work in progress with Nick Howell.
10.5446/20244 (DOI)
So it is a great pleasure to talk in this celebration of Arthur Ogus's 70th birthday; I'm honored to be asked. What I want to talk about, if I can find the chalk... there's the chalk; no, that's the colored chalk; that was the water bottle; it's OK, I don't need the water bottle. First of all, credit where credit is due: this is joint work with José Burgos Gil, Javier Fresán, and Omid Amini, and it also grows out of conversations I had with physicists, in particular Pierre Vanhove and his then student Piotr Tourkine. One other source of inspiration, in some sense for me the most significant, is a series of conversations over the years with Professor Kato. He has graciously undertaken to explain to me the program that he and his collaborators have carried out, a massive program over many years, to understand degenerations of Hodge structures. It is a very subtle and difficult business, and everything I'm going to say is in some sense known either to these guys or to those guys; our attempt is to bring the two groups together, since by and large they do not know each other. In particular, the massive and subtle program of Kato and his collaborators yields many, many invariants associated to degenerations of Hodge structures, but only some of them are of interest. (Sorry? No, I think even Professor Kato, who is probably here and can testify, would admit that some of them are really artifacts of the external structure you put on Hodge structures, while others are completely fascinating, related to regulators and related to physics.) Anyway, let me proceed. What are amplitudes? I wrote down a list of things I want to say here. First, let me say something about how a mathematician attacks physics, to set the level. There is a story I like to tell. My grandson was four and interested in trains, so Christmas came and I bought him a train set. It came in a massive box, and it was immediately clear that I had made a big mistake: the train set was much too complicated and subtle for the kid, and he could make no sense of all the intricate pieces. I thought, oh dear, Christmas is ruined. But not at all: even though the train set was too intricate and complicated, the box was fantastic, with wonderful pictures of trains doing all these exciting things, and the whole day was passed in fantasy play with the box. I think there is a lesson there for mathematics. Physics is too hard for anyone but the dedicated professional physicists to really deal with; on the other hand, physics involves structures which are completely fascinating mathematically, and it is in that spirit that I want to proceed. So I want to talk about quantum field theory. Quantum field theory typically begins with what is really a metaphor, the so-called path integral: a big infinite-dimensional integral that nobody really knows how to attack. One way to attack it is the so-called perturbative way, based on an expansion, again inspired by the finite-dimensional case, indexed by a certain collection of graphs.
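Schematically, the perturbative expansion being described has the following shape; this is a heuristic formula only, and the expansion parameter, written here as a formal variable lambda, and the normalizations are not specified in the talk.
\[
Z\ ``="\ \sum_{\Gamma}\ \alpha_{\Gamma}\,\lambda^{\,\operatorname{rk}H_1(\Gamma)},
\]
where the sum runs over the relevant collection of graphs \(\Gamma\) and \(\alpha_\Gamma\) is the amplitude attached to \(\Gamma\), the quantity of interest in what follows.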
So I write gamma for a graph or for a collection of graphs. And so for each graph, there is a coefficient alpha gamma. And then there is a variable which is raised to the power which is the first homology group, the rank of the first homology group of gamma. So that's sort of the basic shape. And we are interested in alpha gamma. This is the so-called amplitude. So now we're going to cheat, because in fact we're going to write down some integral. We're going to write down a number of integrals for alpha gamma. And by and large, none of them are going to converge. But we won't worry about that, because I won't make any assertion. I mean, there are ways of regularizing and renormalizing these integrals, but that's not our project. We just want to understand the integrals themselves. And in particular, we want to understand the integrand. So let me begin by writing down four different ways to understand this alpha gamma. Let's see if I can get it straight, and bear in mind that I'm writing down things that don't make any sense. That is to say that don't converge. OK. So let's see. The first way, let me change notation here. Let me stick with my notes. I call this capital A. OK. So the first way would write capital A gamma as an integral. Oh, one thing here. We fix an integer capital D, which will be the dimension of spacetime. So r to the d is spacetime. OK. And we give it the Minkowski metric. So in other words, x1 squared minus the sum from 2 up to d of xi squared. So then the first expression for the amplitude associated to a graph of gamma is an integral over, oh, I'll also write, you just general notation, if I have a graph gamma, I'll write g for the homology, the rank of the number of loops. The loop number of gamma. So then the first expression is r to d times g. That's the domain of integration. And then we take the following thing. We take the product over all the edges in gamma. So e of gamma is a set of edges of a certain propagator, which I'll label p sub e. And the p sub e's are quadrics. So bottom line is we get a rational integral. But you see, depending upon the various values of g and d, each of these things has degree 2. So at infinity, I mean, you can see the possibility for divergence and all kinds of complicated things can happen. But at least as an integrand, that makes perfect sense. Let me take a minute to say this a slightly different way. If we want to write down the homology of the graph, we know we have a little exact sequence, let's say with real coefficients, then we can take here the direct sum of, oh, let me take r d coefficients. And then here we can take the direct sum of r d over e over the edges. And then here, let me write it vertically. We can take the boundary map to the direct sum over the vertices from v of gamma. d is the edges and v of gamma will be the vertices. And again, we have r d. But we have to put a little constraint here. We put a little 0 because we know that the boundary, this is just the topological, these are just little segments. And here, this is the boundary, the two endpoints of the segment. And we know that the resulting element here, the coefficients sum to 0. So I put a 0 here. And here we have various projectors, which I'll just denote by e dual. So if I take an edge, little e, I can project this direct sum onto that particular factor. And then here I can take my Minkowski metric, and that gives me a map to r. So I get then, for each edge, I get then a function on this vector space. 
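Schematically, the first, momentum-space expression described above can be recorded as follows; as stressed in the talk, no convergence is claimed and overall constants are not guaranteed, and the restriction to the fiber over the external momenta is made precise next. Here D is the spacetime dimension, g the loop number, and the P_e are the propagator quadrics:
\[
A_\Gamma \;=\; \int_{\mathbb{R}^{Dg}} \frac{d^{Dg}x}{\prod_{e \in E(\Gamma)} P_e} .
\]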
And I can then restrict those functions to the fiber over a given, so now there comes in an important additional structure. I give myself inside here a point in this big vector space, which I call p. And this is the collection of external, what's called external momenta. And so I basically can rewrite this integral as an integral over the inverse image of a given external momentum. So this depends, in other words, on the choice of external momentum. So here I should make this depend on the external momenta, this d dg x. And then again, the product of these pE, where pE now is this function. Is the graph connected? Yeah, let's assume that gamma is connected. OK, so that's the first expression for the amplitude. But there's some others. Let's see if I shoot this really far. Then the second one, sorry, 2, A gamma. There's a factor here. I think it's n minus 1 factorial. I'll write n for the number of edges of gamma. So n will be a fixed notation. And then I'll also write sigma, which will play an important role. It's a simplex. So it will be contained in P, I'll take a projective space of dimension n minus 1, which I'll think of as having homogeneous coordinates labeled by the edges. So there are n edges, and so the corresponding projective space has dimension n minus 1. And sigma will simply be the locus where all the Te's are non-negative. Of course, one of them at least has to be non-zero because it's a projective point. OK. Is this where the real Grassmannian comes in? The real Grassmannian, thank your lucky stars, doesn't come in. But if it were to come in, it would come in here. Yes. Anyway, the second expression then becomes an integral over R dg cross sigma, a sort of product chain. And here we have d dg x. And then we have omega. So I'll write omega for the standard form of integration on projective space. It's not really a form on projective space, because the homogeneity doesn't quite work. So it's a sum over the edges of plus or minus Te times dTe1 wedge and so on, with the dTe left out. So, the standard form. So I put this omega here. And then to make the homogeneity work, I take downstairs the universal quadric. So those Pe's were quadrics. So I take the sum of the Pe's multiplied by the homogeneous coordinates Te. And I raise it to the appropriate power, which is just n. OK. Now the passage between these various integrals is done by sort of standard tricks. And these standard tricks, depending upon the graph, are probably completely illegal because they involve exchanging orders of integration in divergent situations. So you have to be very careful about that. But just again, to just see the shape of the integrand, we're not going to worry. So the third guy involves some extra data, which we'll need to work with. And so it has the following shape. A gamma, again, will be a certain constant, which I've written down here, but I do not guarantee that I've got it right, over, again, it's n minus 1 factorial. And now we just have an integral over sigma. And here come two polynomials, the so-called first and second Symanzik polynomials. So the first Symanzik I call psi; it gets raised to a power, which if I've got it right is n minus (g plus 1) times d over 2, again, times this integration form omega, divided by the so-called second Symanzik polynomial. So I call that phi gamma. And that raised to the power n minus gd over 2. And that's it.
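For reference, the second and third expressions just written read, schematically, as follows; the prefactors are exactly the ones the speaker does not guarantee, n is the number of edges, Omega the standard projective volume form on P^{n-1}, and sigma the coordinate simplex:
\[
A_\Gamma \;\sim\; (n-1)!\int_{\mathbb{R}^{Dg}\times\sigma} \frac{d^{Dg}x\;\Omega}{\big(\sum_{e} T_e P_e\big)^{\,n}},
\qquad
A_\Gamma \;\sim\; \frac{c_\Gamma}{(n-1)!}\int_{\sigma} \frac{\psi_\Gamma^{\,n-(g+1)D/2}}{\phi_\Gamma^{\,n-gD/2}}\;\Omega .
\]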
So here psi is the first Symanzik, and phi is the second Symanzik. And I have to tell you what those things are, but let me postpone that for a minute. Notice that two and three really live where an algebraic geometer is comfortable, because they're rational forms, and we're integrating over certain chains. And so if the answer makes any sense, it should be a period, the kind of thing that one is used to dealing with. The fourth expression is something that a physicist is comfortable with. It's a sort of a toy, well, let me write it down. You'll see what I mean. It's 1 over, again, there's a constant, which I don't guarantee, but I seem to have written 4 pi squared i, the whole thing to the Gth power. And now I have an integral, but now I take sigma twiddle. So I should say, I told you what sigma was. Sigma twiddle is sort of the affine version. So it's just a product over r greater than or equal to 0, indexed by the edges of gamma. So it's the cone over sigma. So sigma twiddle is the cone over sigma. So this is going to be an affine integral. And this is not algebro-geometric. So here we come with the exponential of these same fellows. Now there are no exponents. It is the second Symanzik divided by the first Symanzik as the term in the exponential. And then I just take, as my form of integration, I just take d Te over all the edges. And I have to divide by the first Symanzik psi gamma to the d over 2. So I mean, the most interesting case is when d is 4 and then d over 2 is 2. So this is sort of a toy path integral itself. You see, because what is this sigma twiddle? I have my graph gamma. So here's a stupid graph, gamma. And so sigma twiddle is the space, or I can think of it as the space of metrics on gamma. It's just assigning a non-negative number to each edge, with the possibility of degenerating to 0. So this then becomes an integral over a space of metrics. Now one of the typical versions of path integrals that occur in quantum field theory is integration where the domain of integration is the space of paths, not on a graph, but rather on an interesting Riemannian manifold. So here there's kind of a toy version of such a thing. But OK. And so we want to do algebraic geometry. So of course, we want to forget that guy and work with one of the others, either one, two, or three. Actually no. In fact, the algebraic geometry we want to do involves this guy. So let me go on. Now I have to tell you what these polynomials are. It's kind of easy to say, and I'm short on time, so let me say it quickly. I think of it in terms of configurations. So, configuration: I have some vector space H, some finite dimensional vector space, which is given as sitting inside, well, it really doesn't matter, everything is algebraic. I just take a field k, and I put H inside some vector space with a given basis. And when I do that, then for each edge, I can project off onto the corresponding edge coordinate. And I write e dual also for the composition here. So e dual then becomes just a linear functional on H. And so e dual squared becomes a rank one quadratic form. And so it makes sense to look, I can think of it, if you like, I can think of e dual squared as a map, if I want to do it canonically, from H to H dual. And I can look at the sum, Te times e dual squared. And I can cheat a little bit. I mean, there's a choice.
If I look at the determinant of this expression, it's not quite well defined, because I have to fix a basis. But changing the basis doesn't really change it. See, I've put in these variables, and I really care about this thing as a polynomial in these variables. So I will call this thing psi of H. This is the first Symanzik, and it's well defined up to a scale. And of course, the particular situation we're interested in is where H equals H1 of the graph, which sits inside, well, with k coefficients, sits inside k to the E. And so then this yields psi, what I call psi gamma, which is a homogeneous polynomial in the edge variables. Now, the second Symanzik is slightly trickier, but not much. Well, let me do it in general. I have H contained in k E, some labeled vector space. And let me write W for the quotient. And let me fix a section, call it tau here. So then if I have H, and if I take, for all little w in W, I can look at the vector space, not the vector space H, but the vector space H plus tau of little w; I add on the line spanned by that guy. And I can look at the polynomial I just constructed. I guess this should be a subscript. So let me write it this way. Let me write H sub w equals H plus the line spanned by tau w. And let me write phi of the Te and little w, which will be by definition psi of H sub w. So it's a polynomial then, which is now not of degree g. Remember, in the graph situation, H is a vector space of dimension g. I've added on one line, so it actually has degree g plus 1. So let me put it on the next board. So the second Symanzik, phi, which depends on the Te, but also depends on, I should have called this something else, but in the case we're interested in, it depends on the external momenta. So the quotient W here in our situation is the space of external momenta. So this is homogeneous of degree g plus 1 in the Te and of degree 2 in the external momenta. Now there's a tricky point. I mean, it's not that tricky, but there's a point here: for us, we want things to be in Rd. We want to be in spacetime. What I've described here is kind of linear; I mean, the p here is not in spacetime anymore. So what we have to do is we have to couple psi or phi to Rd with the Minkowski metric. And basically, I'm not going to go through that in detail. Let me just say a word about how that works. If I have a matrix, where does it go here? Yeah, let's see if I can find it, I wrote it down here. If I have a matrix that looks like this: M, and here I have a W transpose, and here I have a W, and here I have an S. So this block is g by g, and the whole matrix is g plus 1 by g plus 1. So the W's are then row and column vectors, and S is just a scalar, just a one by one. Then I can... yeah, so this will be symmetric, yeah, sorry. So M is symmetric. Then there is a classical formula for the determinant. So if I call the whole matrix, let's say, B, then the determinant of B is something like this, and depending on the parity of the day I do the computation, there either is or is not a minus sign here: W transpose times the adjoint matrix of M times W, plus S times the determinant of M. Or I can write this differently: the determinant of B divided by the determinant of M is equal to minus W transpose M inverse, because the adjoint matrix divided by the determinant is the inverse. Is it not quadratic in W? Sorry? Yeah.
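For later use, the configuration polynomials just defined can be written out; the determinant formula is the one from the talk, while the spanning-tree expansion of psi_Gamma is the standard matrix-tree identity, added here only as a cross-check and not stated in the talk:
\[
\psi_H(T) \;=\; \det\Big(\sum_{e\in E} T_e\,(e^{\vee})^{2}\Big|_{H}\Big),
\qquad
\phi(T,w) \;=\; \psi_{H_w}(T),\quad H_w = H + k\cdot\tau(w),
\]
and for H equal to H_1(\Gamma,k) inside k^{E(\Gamma)},
\[
\psi_\Gamma(T) \;=\; \sum_{\text{spanning trees } T'} \;\prod_{e\notin T'} T_e .
\]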
Sorry, I left out, sorry, there's a W there. W transpose M inverse W, what am I trying to say, yeah, plus S. Something like that. Now notice, you see what I can do. What I want to do is couple W to this spacetime. And so I have to then reinterpret this thing: this is going to be, as Christoph points out, quadratic in the entries of W. So wherever I see a quadratic expression in W, I replace it by the Minkowski quadratic form on those two variables. Okay? So from this point of view, it's kind of easy to see how to couple, so you want to couple W to Rd with the Minkowski metric. And in that way, using that technology, you can get your second Symanzik, in the Te and P, to work with P in, well, it's Rd indexed by the vertices that comes here. And this thing is quadratic in P and of degree G plus 1 in T. So these are the two configuration polynomials, which are classical, and they play a central role in the whole game. Okay? So now the situation is that I started out there with a sort of generating series indexed by graphs. And this generating series comes from this sort of metaphorical object, which is the path integral. But that whole process is extremely unconvincing to mathematicians. It literally makes no sense at all. And so that's, in my abstract, I talked about a sea of physics. So that's the sea of physics. The question is how to get across that sea without indulging in sort of fantasies that are difficult for a mathematician to understand. And I don't know the answer, but there is a surprising game that can be played, and so I want to explain that. Okay. So this is the basic setup. Now I want to move to the geometry. And the idea is going to be that our graph gamma, so we start with our graph gamma, but we interpret gamma as being the dual graph of a stable rational curve. You've got to be a little careful. I need to assume maybe that the vertices all have at least three edges. So there's a small constraint on gamma to get stability. But that's in fact not a big deal. In fact, you can move beyond that. So let me remind you, this is again a familiar game, but let me remind you how to play it. For each vertex in V gamma, we associate a Riemann sphere, a P1 indexed by v. Okay. And if we have an edge, and the boundary of the edge is, let's say, v and w, then we glue P1 v and P1 w at a point. Now we have to be careful. I'm not claiming that this is unique; there's moduli here. If a given vertex has four or more edges attached to it, then we will have four or more attachments to the corresponding P1, and so there will be moduli. So I don't claim that this is unique, but just do it somehow. And so this gluing, this process, yields a curve, a stable rational curve, which I call C0. So this yields C0. Is everyone familiar with that game? So it's stable only if each vertex has at least three edges? Yeah, yeah. If it doesn't, I mean, this is not a real difficult issue. We can deal with semi-stable situations as well. So we can then look at the versal deformation of this C0. And as I say, C0 itself can have moduli. So the picture that we get is something like this. Let's see, can everyone, I know there's a shadow effect. Let's see if it's okay. So the picture looks something like this.
We have, and I want to think in the analytic category. So I'm drawing, because I want to do topology. So I need some, an algebraic geometry would tend to do formal things, but I want to work analytically. So we have a family then of curves over a space S, and S contains a sub-variety, a closed sub-variety, which I call T, and then I can pull back CT. So we have a thing like this. So let's see, S is, I don't know, it's open. Essentially C to the 3G minus 3. And eventually I'm going to add more parameters. So I'm drawing an S plus some more parameters, but for the moment you can just think of it as 3G. The more parameters we're going to need to deal with marked curves. So this is going to be a family, family. And G is the genus of the graph. G is always the genus of the graph, which is the same as the genus of the curve. I should have said that in fact C0, so the dimension of H1 of C0 O C0 is G, and that's also the dimension of H1 C0. So what do I want to say? Yeah, so this is the so-called Versal deformation. And the T, there will be divisors. T will be an intersection of divisors, D, E. Yeah, it's G, which is the same, sorry. Yeah, well, C0 as well. Okay. Capital N, you're not the number of... Sorry? Capital N. And in capital N, I'm going to tell you in a while. This capital N, you mean? Yeah. That's going to account for, we're going to have to put punctures, or not punctures, we're going to have to look at marked curves. So these will be extra parameters for markings. Okay, but for the moment, we're not doing that. We're just understanding the geometry here. So the point is that T is itself an intersection of divisors. So these are divisors corresponding, so we can think of S as its diversal deformation. So if we fix a crossing point, and remember there's one crossing point for each edge, if we fix a crossing point and we look at deformations of the curve where that crossing point stays crossing, so to speak, other crossing points can open up, but that one stays, that defines a divisor. So DE parameterizes deformations with the... I didn't give it a name, but I should give it a name, let's say, with CE fixed, or let's say fixed, so the curve, other crossing points can open up, but this one stays fixed. Then T is the intersection over all these divisors. This is a divisor because I'm just putting one constraint on my deformation. Okay? All this is unobstructed by the classical because we're dealing with curves. Okay. So now let me draw a picture. Now we have to come up with a... So I'll draw a genus two picture, and my artistry is not very... I apologize. No. Already I've messed up here. This one has to come down like that, and then I draw the same thing again here. Shoot. I have difficulty. So imagine two sort of curious shaped objects kissing each other in three places. So this is C naught. And what is it? C naught? Well, it's... the graph should have three... let's see, two vertices and three edges. So that one's easy to draw, gamma is just this. Okay? Two vertices and three edges. But when I look at the versatile deformation, I just... it becomes the usual genus two picture here. And so what have I done? I've squeezed some vanishing cycles. That one and that one and that one. So these are vanishing cycles. Okay? And so what I've drawn here is C naught and let's call this C... let's say S naught for some base point. So I'll take a base point in here, which is away from... so S naught is not in any of the divisors. So it corresponds to a curve with no singularities. 
And then I degenerate. So that's a classical, well understood picture. And some remarks. First of all, we have a specialization map, which we call sp, from the homology of the general fellow, so we call it C S naught, that's the smooth curve, say with rational coefficients, to the homology of the singular curve. And it's surjective. Okay? First of all, the existence of such a map is classical. It amounts to the statement that in the fibers here, if I take my singular fellow, I can always find a tubular neighborhood which is a deformation retract. And then I can imagine my smooth guy living in that neighborhood. So I can map the homology of the smooth guy to the homology of the neighborhood, and then via the deformation retract, it maps to the homology of the special fiber. So that's a classical game. And the fact that it's surjective, you can just see it because you're looking at closed paths in here, and they lift to paths here which end at the vanishing cycle, but then I can just go around the vanishing cycle a little bit. And it just amounts to saying the vanishing cycles are connected. So it's easy to see that it's a surjection. Okay? Now, the second fact. So the first fact is that it's a surjection. The second fact: I write A for the kernel of the specialization, so that's the space spanned by the vanishing cycles. Notice the vanishing cycles are not linearly independent, because when we have two irreducible components, we get a relation between the vanishing cycles. So the vanishing cycles are not linearly independent. Here are three of them. But A is the space spanned by those, and the assertion is, what's the assertion, that A is a maximal isotropic subspace of H1, or H lower 1, of C S naught. Okay, now there you have to think a little bit. You have to convince yourself, first of all, that it's isotropic, because if I get close enough to here, then the vanishing cycles are going to be separate from each other, because they live in little neighborhoods of the points, and the points are separate. So the vanishing cycles are clearly disjoint from each other, which means that the pairing between two of them will always be zero. And the fact that it's maximal isotropic follows simply because this map is surjective, and this has dimension G, and this has dimension 2G, so A has dimension G. Okay, so the reason is just that the dimension of A is equal to G, by fact one. Which quadratic form do you see on the first H1? Say again? The quadratic form, which is... It's just the intersection form, the alternating form on H lower 1 of the curve. It is physically just intersection, yes, with orientation. So let's see, I still have enough time, I think. Yeah. So I now want to talk about the Picard-Lefschetz transformation. So, three: Picard-Lefschetz. And Picard-Lefschetz is like this. I write Ae for the vanishing cycle associated to an edge e in E of gamma; remember that these guys index the points here, the bad points, the singular points. And so for each of those, I have a circle that is contracting to it. So that gives me the vanishing cycle, which I call Ae. Then Picard-Lefschetz says that if I look at the effect on the homology that I get by winding around, so if I wind around De, the divisor De associated to that particular e, then what happens is that a general one-cycle B goes to B plus, and there's an issue of orientation, but let me say plus, the intersection number of B with Ae, times Ae. There's an issue...
You've got to get the sign right. Is it minus? Luc says minus. So there is also a convention for pi1 of the circle. Yeah, I mean, we have to figure out which way we're winding here. It's not an important point. So I'll just say plus or minus. Okay, so this is a classical familiar fact. So if I call this transformation Le of B, so, the linear transformation, then we know that Ne, which is Le minus the identity, which is also the log of Le, just sends B to the intersection number of B with Ae, times Ae. So this is all, again, familiar stuff, and the nilpotent orbit... Can you use the blackboard to the right? There's one more blackboard. Yeah, but somehow I'm into the logic of putting it on the big blackboard here, so let's see if I can do it without covering anything up here. If I bring this one down... Let's see if this works. Okay. Let's see if I can do this. Then the nilpotent orbit, so the nilpotent orbit is just the collection of all endomorphisms of the form: sum over the edges of Te, some non-negative real constants, times these Ne. So maybe we call this N. It's just the collection of all these things. And it's easy to see that in fact N is nilpotent. It's not perhaps quite obvious, but you have to think about it a little bit. Each of the Ne's is, I mean, not just nilpotent, it's in fact square zero. But any N? Is it square zero? It's certainly nilpotent. Yeah. It's certainly nilpotent. Let me not get in trouble. It's a kind of product of the different... Yeah, because you see, physically, again, it's this issue that because the vanishing cycles are all physically disjoint, they don't... If I apply... So the product of different Ne's is zero. So the product of different Ne's is zero. It's actually square zero, yeah. Okay. That's the kind of thing that's difficult to think about when you're on your... Nilpotent as endomorphisms of which space? Yeah. This is exactly the point. I'm glad you said that, because this is exactly the point I was going to say. These are nilpotent as endomorphisms of H1 of the smooth CS0. But there's another way you can think about these things, which is as follows: you can write H1 of the graph. That is the same as H1 of the singular curve, C0. And that's isomorphic to H1 of the smooth curve, modulo this A, which remember was the maximal isotropic subspace spanned by the vanishing cycles. And any one of these Ne's kills all of A. So it gives a map from this quotient. And the image lies in A, which is isomorphic to the dual of H1 of CS0 modulo A. I don't claim these things immediately spring to mind, but this is then isomorphic to H1 of gamma dual. OK. So for each edge, then, we get a map from this G dimensional vector space to its dual. That is, we get a quadratic form. And the nice fact, which is not hard, but it's a little exercise, so I'll call it a proposition, is that this Ne is equal to, and I think I screwed up by not giving it a name, is equal to Me, which was, so remember, I had H1 of gamma inside, let's say, R to the edges of gamma. And then for each edge, I could project to R, and this gave me a functional that I call e dual, and Me was simply associated to the quadratic form e dual squared. And the proposition is that this symmetric, I mean, you can think of it as a symmetric matrix or a quadratic form, is the same as the one that comes from the geometry. It's not hard, but it takes a little effort. OK.
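The structure just described can be summarized in a few displays; signs depend on the orientation conventions discussed above, and coefficients are rational:
\[
0 \to A \to H_1(C_{s_0},\mathbb{Q}) \xrightarrow{\ \mathrm{sp}\ } H_1(C_0,\mathbb{Q}) \to 0,
\qquad \dim A = g,\quad \langle A,A\rangle = 0,
\]
\[
L_e(b) = b \pm \langle b, a_e\rangle\, a_e,\qquad
N_e = L_e - \mathrm{id} = \log L_e,\qquad
N_e N_{e'} = 0,
\]
\[
\mathcal N \;=\; \Big\{\, N=\sum_{e\in E(\Gamma)} t_e N_e \;:\; t_e \ge 0 \,\Big\},
\qquad
N_e\colon H_1(\Gamma)\cong H_1(C_{s_0})/A \longrightarrow A \cong H_1(\Gamma)^{\vee},
\]
and the proposition identifies this last map with the rank-one quadratic form M_e = (e^{\vee})^2.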
So with that in mind, we want to relate the first and second semantics, polynomials, which came out of just an abstract discussion of the graph, the linear algebra associated to the graph, to the limits of heights on these family of curves. Now to talk about heights, we need to, oh my god, we're out of time. So I'm going to go five minutes over with the chairman. OK, the chairman can start his watch. OK. So I need to talk about heights. So let me at least say enough to make the statement. So I consider now, let me draw the picture. Maybe a good way to do here. So here is my space S. Here is the bad fiber, and here is this, here is C naught, the singular curve. And so then here is a smooth curve. This is the point S naught. So what I want to do, say again. About zero is disconnected. No, no, no. No, no. It's connected. So above, so I want to add parameters. And when I add parameters, that will enable me to enrich the picture by adding some sections. So I will add some sections here. So these are sections. So I have sections. So sections. And what do we call sections, let's say mu, mu i. And then I control very carefully how the sections meet at infinity. So what data do I have here? Suppose that this section meets this component. Well this component, remember, is, remember the components are indexed by the vertices. So this is the V component. So for V, vertex of gamma. So what I do is two things. I want to get, I want to deduce from this collection of sections, which I think of as a family of zero cycles. In fact, I think of as two families of zero cycles. I want to deduce external momenta at infinity. Now external momenta are linear combinations of the vertices with coefficients in Rd. So what I have to do in the first instance is I have to couple these sections to the vector space Rd. So I couple the sections to Rd. Now what does that mean? It doesn't make any sense. But what I'm interested in is the height. So I write down a sum. I don't know what that was, but whatever it was, it's gone forever. I write down the height, which is a sum Rv, let's call it mu, mu, what do I want to say, Ri, let's say mui, where the mui's are the sections. So let me call this, let's say, A. And I'll take another one, let's say, Ri prime, or J prime, mu J, and I'll call this, let's say, A prime. And I look at the height, which is A and A prime. And because I want these things to yield external momenta, I have a constraint that the sum of the Ri's should be 0 and similarly here the sum of the Rj primes equals 0, which is perfect because to talk about a height, I need to talk about zero cycles of degree 0. So I don't have time to explain this idea of coupling to a vector space, but it's not hard. Once you know how to define heights, it's easy to see how to couple it to a vector space. And then the theorem, yeah. Sorry, is it some sort of arithmetic thing? Yes, I mean it is the, no, no, no, no, no, this is the height of, if I have two zero cycles of degree 0, which are disjoints, I have always defined, essentially you can compute it by taking differential forms with log poles on A and integrating over chains here. So this is the classical height. I'm sorry, I did my time badly. 
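A minimal sketch of the height data just introduced, with the coupling to spacetime understood as in the discussion of the second Symanzik polynomial: the sections mu_i give degree-zero 0-cycles with coefficients in R^D,
\[
A = \sum_i r_i\,\mu_i,\qquad A' = \sum_j r'_j\,\mu_j,\qquad r_i,\, r'_j \in \mathbb{R}^{D},\qquad \sum_i r_i = \sum_j r'_j = 0,
\]
and for disjoint supports the height \langle A, A'\rangle is the classical archimedean height pairing, computable by integrating forms with log poles over suitable chains, coupled to R^D via the Minkowski form.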
But the theorem is going to be that the, if we look back, and if I preserved it and I defined that the e exponential of i psi, phi over psi, is the limit as a certain parameter alpha naught, which I didn't really have time to explain, as alpha naught goes to zero, which essentially amounts to saying if we look at the structure of the base S, so the base S we can think of as a product of copies of GM, so like punctured disks. And if we imagine the parameters as all going to zero, then this is what this alpha naught is saying. And this exponential here is a limit of exponential of i height a, a prime, where a and a prime are both zero cycles corresponding to the given external momentum here. So this is a function of external momentum, so a and a prime are zero cycles corresponding to a given p in the sense that I explained, that is they say they cross the right vertices with the right values of the space time at those points. So the bottom line is then that the term that occurs in the amplitude is a limit of heights. Now say again. Alpha zero. So alpha zero is measuring, I don't really have time to go into detail, but alpha zero is measuring how we are approaching the, the, the T. So remember we have S and we have T. So I'm thinking of S minus T as being something like a product of punctured disks. And alpha zero is measuring how fast we are approaching the punctures. But the divisor normal crossings. Yeah. So we have the normal crossings and we are approaching simultaneously all the, all the parameters at a, at a given speed. I'm sorry. This is on the web and in, in archives so you can get the details. But I think I'd better stop. So when you have this multi-section, this several sections, so they cut each, each component in certain, right. What is the relation of this to what you explained for, those numbers are related to, how they are related to what you have. Yeah. So, yeah, exactly they are. Let me just say a word about that. Is it okay I put it over here? Remember everything is a function of the external momentum which are in R dv0. And v indexes the, the, the p1v's, the components. Okay. So for each v, for each little v in v, I give myself a section. Okay. So I have a section, let's call it mu sub v. This section meets pv, but it's, but that's it. It just meets pv. Okay. So it's, it's, it's this guy. But you have done several rounds that meet. Yes. Because, yeah, that's a good point. And again, I'm sorry I ran out of time. There's a technical point about the height which we need to assume that the, the zero cycles in question are disjoint. Right? So what I would like to write is a, a. But that doesn't make any sense. So what I do is I take an, an a prime which is also meets there with the same, with the same thing. So that's the, that's the idea. Okay. Yeah. Yeah. So I have a question, so what is the relation between the height and the green function? Because normally the way I would think about it is that I would have a function on the remand surface between two functions that have the green function. Yes. And I would take the degeneration limit the way you design it. I mean roughly speaking they, they are the same. They are the same. The height is the greens function. There are in fact, I like to think of it in terms of heights because the, the philosophy here, what the, what the mathematician wants to inject are these height structures. And the heights are typical examples of real valued functions associated to variations of a height structure. 
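Schematically, the limit formula stated here reads as follows; the normalization constants and the precise definition of alpha_0 are in the paper, and A, A' are disjoint degree-zero 0-cycles supported on the markings which realize the given external momenta p:
\[
\exp\!\Big(i\,\frac{\phi_\Gamma(t,p)}{\psi_\Gamma(t)}\Big)
\;=\; \lim_{\alpha_0 \to 0} \exp\!\big(i\,\langle A, A'\rangle\big),
\]
where alpha_0 measures the common speed at which all the smoothing parameters approach the boundary T.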
So this should be a general game: when we have a degenerating variation of Hodge structure, we should be able to get interesting amplitudes as integrals over the nilpotent orbit associated to this variation. So that's the idea. But you're right. I mean, that's why I say that you guys knew this, but you maybe didn't think of it from the point of view of variations of Hodge structure. And you always need one parameter, alpha zero? Yes. You don't want, yeah, this is actually something Professor Kato can be more precise about than I can: you don't want to let the various edges go at different speeds, because if you choose the different speeds badly, the limit can be not what you want. But that's interesting, because that's precisely what happens in string theory, since you always have alpha prime. Yeah. I'm sorry, I should in fact have called it alpha prime. Yeah. I should have called it alpha prime. But it is measuring the approach. You're proving that string theory has only one dimensionful parameter that is needed, in a sense. In a sense. I mean, the fact that you need one parameter is very striking. I'm still working with the box. I mean... I mean, you're not losing anything: if you have this big-dimensional S, you can reduce to a one-dimensional situation. Yeah. So now you move from gamma to... Yeah. So you get the negative sign. Well, I mean, it's a rank one, one-dimensional thing, yes. You get the curve, the semi-stable reduction. Yes. And then what you get is just what's called the monodromy pairing. Yeah. So here you work with Q coefficients. Yeah. But if you look closely at the integral part, then you have an isogeny, and the cokernel of this map is exactly the group of connected components of the special fiber of the Néron model. Yes. So I wonder if this, the Néron model, enters into your picture. Not yet. Not yet, but... And also I should say, coming to heights: there is the conjecture that, more generally, you have an abelian variety A, and A' is not the same, yeah, it's your A prime, and we have the pairing between those two component groups. So for an abelian variety with semi-stable reduction, the conjecture is that the pairing is perfect. Mm-hmm. So in fact, this was proven recently by, I think, a student of Kato. Mm-hmm. But previously there was some work of Bosch. Mm-hmm. And Bosch considered it, so, proved it in some cases. Mm-hmm. And precisely using heights, pairings. Mm-hmm. So I wonder how this, well, let me just say that. Yeah, I think it also appeared in the literature, but, uh, yeah. So I wonder if this story, this final story, has any significance or relevance. We have to ask the physicists. From my point of view, let me notice just one remark that is suggested by what you said: this is absolutely the most degenerate situation, we are going to the worst place. Yes. Why should physics be preoccupied with the worst place? Is there any physics to be found by just degenerating a little bit? Yeah, yeah. You know, and so there's lots of possibility for interplay between the math and the physics.
Uh, but just to record, I think, too, that the one dimension, the, the dimension of the, the, the dimensions. Well, here we need, we need, uh, notice we need the higher dimensional situation because we need, ultimately, our integral is going to be over the full nilpoten orbit, not, not just over one, one, one guy. Mm-hmm. But, or why do you want dimension? It is quite fascinating. It took me questions, uh, took me questions. Um, here's the, was it the ball of the musseless, so? Yes, yes. I'm sorry, I should have said this is all massless, but it's a nice, it's a nice question. What happens if we put masses in? I haven't thought. And second, uh, can you make a conversion of this for open strings when you have oriented final graphs and you replace vertices by disks and edges by strips? Edges by strips. Well. That's open string, sir. Climbing the mussel system. Yeah, yeah. Uh, I could repeat, but I'm not sure, I'm not sure it would be correct. Uh, not by me in any case. Uh, yeah, not by my group. No question. What do you say? I'm sorry, why, why you said it's a good thing to deal with this one? That because it gets rid of all these intervals entirely? Was I a singular problem, Akane saying? Well, Akane said a lot of things. Um, uh, let me, I mean, let's just say that the chain of integration is replaced by irreducible components of the real Grassmoney. End of story. For me, I mean, I can't go further than that. So we suggest we go again? Yeah.
Feynman amplitudes constitute a beautiful little island of algebraic geometry surrounded by a sea of physics. Ancient AGs marooned on the island cannot help but feel skeptical about the seaworthiness of the transport physics offers from the island to the shores of reality. With the advent of string theory, physicists suggest another approach, realizing Feynman amplitudes as suitable limits when the string tension goes to zero. This talk will give an algebro-geometric interpretation of this idea. The Feynman amplitude becomes an integral over the space of nilpotent orbits at a point on the boundary of the moduli space of marked curves. The integrand is a limit of heights of cycles supported on the markings. This is joint work with José Burgos Gil, Omid Amini, and Javier Fresán.
10.5446/20241 (DOI)
Thank you. It is a great pleasure and an honor to participate in this celebration. ... They all study more or less the same thing, just from slightly different angles, so it can be rather confusing. I will try to give some pictures in the middle of the lecture, but... It is very simple, although in general it can be rather complicated: what I call the topological ramification locus. All these notions will be introduced later. So, we want to give some combinatorial description. For usual curves, semistable reduction gives more or less the best possible combinatorics. For morphisms, you will see that there is simultaneous semistable reduction, but it does not describe the morphism well; it is good in the case when semistable reduction... In general, that by itself is not so interesting. You have not explained what these monomials are, in which... In which coordinate? Yes, in which coordinate. You just have some function from [0,1] to [0,1] which is a monomial of natural degree, with some coefficient. In general, this is nothing deep, so I will not dwell on it. Now, the different function was studied in a joint work with my PhD student Adina Cohen and a former postdoc, Trushin. A separate paper was about the profile function. And there is also a short overview of the two papers, available now on arXiv. And, well, these lectures will also be on the IHES site, and I also posted them at my site. Okay, now the plan. Well, for about half the lecture I'll talk about more or less basic results, what is known to experts on Berkovich geometry, but again, I would like to recall various things. So first of all, we'll discuss Berkovich curves, then we'll discuss morphisms of curves, what is more or less well known. I cannot say classical, because Berkovich introduced all this stuff in the 90s. I used to say classical, but it's sort of classical for Berkovich geometry. And then I will talk about the different function and the profile function in the last two quarters of the lecture. Okay, so let's start with conventions. Probably I'll try to put some conventions here on the board, because the only problem with slides is that I can put very little information on the board simultaneously, so I'll try to keep something here. Okay, so valuations are always taken to be real; I'll assume such real valuations if it's not obviously stated otherwise. Also we'll use the notation k-circle; this is the ring of integers. It's customary for rigid geometry and Berkovich geometry. And the residue field will be denoted by a tilde. Okay, so this is the notation with respect to valued fields. Now, we also fix k, which is an algebraically closed, complete ground field. And the valuation is not trivial, right? Okay, a k-analytic curve X will be called nice if it is smooth, proper and connected. In principle, instead of smooth one can only require rig-smooth and without boundary; for simplicity I ignore this. In the papers one considers a slightly broader notion of nice; for us it's just smooth, proper, connected, and then it is just the analytification of some algebraic curve which is also nice, that is smooth, proper and connected.
Okay, and f will denote the morphism we want to study; so it is a finite morphism of nice curves. Okay, good. Now. Go back one slide. If X equals X an, is that what you need to say? In particular, is it a property of such a thing: if it's an analytification, it's not always the analytification of a nice algebraic curve, is it? It is always, under these assumptions; it is always what is written there; in particular, it is then an analytification. Okay, now. About points of Berkovich spaces. So I don't want to give the general definition, but maybe I assume you are at least a little bit familiar with rigid geometry. So Berkovich spaces are defined similarly: one takes affinoid algebras, defines some spectrum, which is richer than in rigid geometry, and then glues them; the gluing is a little bit subtle, as in rigid geometry. So I want to skip the details, but it's important to mention that points correspond to semivaluations, to real semivaluations on affinoid algebras. And to any point one can associate a completed residue field, a good invariant of the point; unlike in algebraic geometry, the fields here are valued and complete. So really a good invariant of a point is its completed residue field. So for a point x in X, we are given H of x, the completed residue field. I think Berkovich considers more general affinoid algebras than Tate. So you take the strict ones? That's correct, but I decreased the generality here from the beginning, so we can assume everything is strict. And I'm not going to use this in any detail; affinoid algebras will not really appear in this talk. Okay. For any k-variety, so in principle, since in our generality everything is algebraizable, we can just think about analytifications of algebraic objects, and it's maybe a slightly simpler way. So for any algebraic k-variety, Berkovich functorially defines an analytification and a map from the analytification X to the variety curly X. The fiber over a point z consists just of all real valuations on the residue field of z. And for any such valuation we can complete k of z with respect to this valuation, and we get the completed residue field of a point. So in particular, for any closed point, since k is algebraically closed by assumption, the residue field is just k, so we can put only one valuation compatible with the valuation of k. So the fiber is just a single point, and this is a classical point, the kind appearing in rigid geometry; so we'll usually call it a rigid point. Now in particular, if curly X is an integral algebraic k-curve, then, set-theoretically, the analytification of curly X consists of the classical rigid points, and everything else is just valuations on k of curly X, the field of rational functions. Okay. Now, points of k-analytic curves. Okay. So in principle, points are divided into four classes. This is more or less standard, but okay, let me go through it. So type 1 is just the usual rigid points, which are sort of the classical ones. Everything else corresponds to a non-trivial extension: H of x over k is a non-trivial extension. So when we are given a non-trivial extension of valued fields, we can ask what extends: whether the residue field extends or the group of values extends. So type 2 is the case when the residue field extends, so H of x tilde is strictly larger than k tilde. And it turns out, it requires an argument but it's not complicated, that in this case the group of values is just the same as the group of values of k. I am using that k is algebraically closed in this claim.
And the residue field of the completed residue field of x is actually the function field of a k-tilde curve: it is of transcendence degree one and finitely generated over k tilde. So we'll denote this curve C of x. So for x of type 2, C of x is the residue curve, the curve with function field H of x tilde over k tilde. Now, type 3 is the case when the group of values extends. In such a case, one can show that the extension of the residue field is trivial, and the quotient of the groups of values is just Z. And the fourth case is when we have a non-trivial extension, but nothing extends: neither the group of values nor the residue field. This is the worst case. It's called type 4. Now a small remark. Fortunately for us, type 4 points will not be essential at all, so I'll mainly ignore them throughout this talk. They are the real headache when one tries to prove the stable reduction theorem, or resolution in high dimensions, stable reduction in high dimensions; there they are really your enemy. But once we have stable reduction for curves, they are no real trouble. Okay, now the affine line. So let's consider the example of how these points look for the affine line. So take X the affine line over k and fix a coordinate t. Then it turns out that any point of type 1, 2, or 3 admits a very simple description: the norm, the semivaluation, corresponding to such an x is, after we shift the coordinate by a, just a sort of monomial valuation; it's defined as the maximum over the monomials. So it's a sort of generalization of the classical Gauss valuation: when r equals 1 and a equals 0, it is the Gauss valuation. Okay, and in particular, x is of type 1 if and only if r is zero; this is just the classical point t equals a. And otherwise, x is the maximal point which satisfies the inequality: absolute value of t minus a is less than or equal to r. Because obviously any point which satisfies this gives values which are less than or equal. So x is the maximal point of the subset given by absolute value of t minus a less than or equal to r. So it's just a disk. So any such x is the maximal point of a disk. So points of types 1, 2, 3 are just classical points and maximal points of disks. What do you mean, maximal point? The semivaluation it defines on polynomials in t is larger than, or equal to, the semivaluation of any other point contained in the disk. Okay, now we can parameterize points by a and r. So the question is, what is the redundancy of such a parameterization? It does have redundancy, and it's very simple: the disks must coincide for the points to coincide. And non-archimedean disks coincide if and only if the radius is the same, r equals s, and a and b are close enough; any point of a disk is a center. So we parameterize the point by a and r, a center of the disk and its radius. And because of this, we can draw a picture of X very easily. For example, for any point a, we have a line of points p a r. For any other point b, we have a line of points p b r, and they meet precisely at the radius equal to the absolute value of a minus b. So the whole structure of the points of type 1, 2, 3 is just a sort of tree, which probably, even if you have not studied Berkovich geometry, you have seen: the picture of the tree is very easy to draw, so in many overviews you just see such a tree. Now, a small remark, not important for us. Type 4 points correspond to nested sequences of disks with empty intersection, no classical intersection.
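For the record, the classification and the affine-line picture just described can be displayed as follows; this is the standard Berkovich description, with f an arbitrary polynomial:
\[
\text{type 1: } \mathcal H(x)=k;\qquad
\text{type 2: } \widetilde{\mathcal H(x)}\supsetneq \tilde k;\qquad
\text{type 3: } |\mathcal H(x)^{\times}|\supsetneq |k^{\times}|;\qquad
\text{type 4: neither, but } \mathcal H(x)\neq k;
\]
\[
|f|_{p_{a,r}} \;=\; \max_i\, |c_i|\, r^{\,i}
\quad\text{for } f=\sum_i c_i (t-a)^i,
\]
so p_{a,r} is of type 1 iff r = 0, of type 2 iff r lies in |k^{\times}|, and of type 3 otherwise.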
So it may happen that we are given some intersection of disks containing nothing, and then it is a type 4 point. I put it in brackets; it's not important, so that's all. Okay. Now, skeletons. So let's say we have a subgraph gamma of X; by a subgraph we mean the following: it's a connected subgraph whose vertices are only of types 1 and 2. Maybe on this picture I should also explain what the types are. So all these points are of type 1. And the type 2 points, the interesting points, are the intersection points, the disks of rational radii; these are points where we can specialize to different points, in different directions. And something like this is a type 3 point. So we want the vertices of the graph to be only of type 1 and type 2. And such a graph is called a skeleton if the complement of the set of vertices is a disjoint union of open disks and open annuli, or maybe semi-annuli; by a semi-annulus I mean an open disk with a puncture, so a disk minus a point. Because my vertices can be at type 1 points, I also allow such a thing. So a typical example: take something here, make a cut, and make a cut here. What you have between the two cuts will be an open annulus. Can you also join 0 and infinity in P1, through the whole chord? No, I am trying to keep this requirement: you want at least one type 2 point. You are absolutely right, you want at least one type 2 point. Now, a skeleton gives a good decomposition of the curve. First, consider X minus the whole of gamma. For any annulus, we have a central chord, like here, which connects the two ends of the annulus. If we remove the central chord from the annulus, we get a disjoint union of open disks. In fact, I did not say it, but we want the vertices, as I explained, and we want these central chords of the annuli. If we remove the whole graph, the whole skeleton, from X, we get a disjoint union of open disks. The curve becomes very simple outside of it, and this means that the skeleton knows almost everything about the curve, at least the combinatorial information. Any larger subgraph, any larger graph containing it, is also a skeleton, and this is a very useful tool; it works only for curves, it is purely a one-dimensional phenomenon. One of these phenomena is, for example, that the genus of X is the sum of the genera of all points plus the first Betti number of X; or one can sum only over the vertices of the graph and add the first Betti number of the graph. This means that outside the graph all points have genus 0, and the graph captures all the loops of the curve. So the graph knows a lot about the curve, and this fits with the observation that X minus gamma is a disjoint union of open disks. Now, about the genus of a point: if the point is of type 1, 3 or 4, we simply cannot associate to it anything interesting. But if it is a type 2 point, then we have associated to x the curve C of x, and we can associate to x its genus. So with H1 of X, in which cohomology is this? With H1 one can associate... As in the weight filtration? Yes, something like the weights. In principle, I do not need this, but... well, this is not the formulation that you use, but it is equivalent; it is a so-called equivalent formulation, and we will not need it. Now, in connection with formal models: we always have a reduction, a specialization map, from X to the closed fiber of a formal model, and then the preimage of a generic point of the closed fiber, of any generic point, is just a single point of type 2. The preimage of a node is an open annulus, and the preimage of
a smooth closed point is an open disk. If you have a semistable reduction, you can construct a skeleton from it; moreover, the set of vertices will naturally be parameterized by the irreducible components of the closed fiber of the semistable model. Okay, now the local structure of nice curves. First, it can be obtained from semistable reduction, but one can also go the other way: one can prove it directly by analytic techniques, locally, and then it is not so difficult to glue and to prove stable reduction by purely analytic methods. So, first, the analytic structure: the curve looks like a big graph. It is something like this, maybe with very many loops, but we should think about it just as a graph. Yes, I did not say what the topology is; it is not very important, and for intuition I think this is enough. There is some fancier topology, but let us not dwell on it. Any point of type 1 or type 4 lies in an open disk, because on the skeleton we can only get type 1 at a vertex. Any type 3 point lies on an open annulus. The description of type 2 points in this topology is a bit fancier: a type 2 point has many directions, branches, and these branches are parameterized by the closed points of the residue curve. In the picture I drew, every branching point looks like P1 in this topology, because all the genera here are 0. But in general, type 2 points can be more complicated, and the residue curve does not have to be P1; so at some points we may have positive genus. Well, now one more basic fact about curves. Any nice curve has a canonical metric, such that the logarithms of absolute values of functions are piecewise linear. So in this situation there is such a canonical metric; its existence can be proved by several methods; for example, you can use stable reduction and deduce it from that. So this is just as with stable reduction. Now, what I will actually use is the exponential metric, or radius metric. So the radius metric is the exponential of the metric, the metric taken in the multiplicative rather than the additive sense. As a result, absolute values of functions will be piecewise monomial rather than piecewise linear, and this is how we should think of them. So there are some distinguished subintervals; in a sense, we do not take logarithms, we work multiplicatively when we use the radius metric. But how can this be, so to speak, for the curve? So there are some subtleties: the reduction is not unique, and the minimal one, as for P1... So, if you want... I did not say anything about this. Okay, but let us look at P1. This is really P1. Good. So now let me tell you about the inverse exponential distance from gamma. This will be important in the last part of the talk. So maybe I will put it here. So if we are given gamma inside Y, a skeleton, r gamma is the inverse exponential distance from gamma. So, in principle, this means, equivalently: we can remove gamma from Y; everything that remains breaks up into disks; we can normalize each of them to be the unit disk; and then the inverse exponential distance from gamma of a point y is just the radius of y in this normalization. The same normalization, I think, was used by Francesco in his work on solutions of p-adic differential equations. Good.
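Two of the facts mentioned above, written out as formulas; the first is the genus formula (g(C_x) is the genus of the residue curve at a type 2 point, zero for all but finitely many points, Gamma any skeleton), the second is my reading of the inverse exponential distance:
\[
g(X) \;=\; h^{1}(\Gamma) \;+\; \sum_{x\ \text{of type 2}} g(C_x),
\qquad
r_\Gamma(y) \;=\; \text{radius of } y \text{ in the normalized unit disk of } Y\smallsetminus\Gamma
\;=\; \exp\big(-d(y,\Gamma)\big),
\]
with r_Gamma equal to 1 on Gamma itself and d the path metric in which logarithms of absolute values of functions are piecewise linear.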
Now, so far this was the picture for a single curve; let us now talk about morphisms. First I would like to introduce the multiplicity function of a morphism. So f from y to x is a morphism, and n_f is a function from y to the natural numbers; maybe I will put it here. For type 1 points the multiplicity is just the usual ramification index. For other points, the extension of completed residue fields, unlike in the algebraically closed situation, can grow, and the degree of this extension is the multiplicity at y. It is a very simple fact that f is a local isomorphism at y if and only if the multiplicity is 1. So it is a very natural question, for example, to describe where f is not a local isomorphism; I will call this the ramification locus. So we consider the multiplicity sets: the loci of points of y with a given multiplicity, up to the degree d. We will need this only briefly. We will also say that f is topologically tame if the multiplicity is invertible in the residue field. Okay, now a fact: for any interval inside y, if we consider it parameterized with the radius parameterization, first of all the image of I under f is a graph. So I and this graph have a natural exponential metric, and with respect to this metric the restriction of f is a piecewise monomial map. And the degrees of this map, or slopes (strictly speaking one has to say degree, but I'll say slope), in this example are just the multiplicities. So if we know the function n_f on y, we actually know everything about the metric structure of the map; we know completely what the slopes are. So we completely describe the metric aspects of the morphism. Okay, now simultaneous semistable reduction. A skeleton of f is a pair of skeletons, one on y and one on x, such that the skeleton on y is the preimage of the skeleton on x, and the set of vertices of the skeleton on y contains the ramification locus. In fact, the only reason why I insist on allowing vertices of type 1 is that I want ramification points to be inside the skeleton. Okay, now the theorem: any finite morphism between nice curves possesses a skeleton. This is so-called simultaneous semistable reduction. It is not stronger than usual semistable reduction. Why? Because we know that once you can find one skeleton, you can find as many as you want; just enlarge, and you still get a skeleton. So you can easily play with x and y and find a pair which is compatible. So the theorem is not essentially stronger, and it can be deduced relatively easily. The price we pay is that it does not give such a good description of morphisms as semistable reduction provides for curves. Okay, so this is the argument for how to deduce this theorem from stable reduction. And one can also formulate it in the language of formal models; we will not need that, so it's just a side remark: it is more or less equivalent to the existence of a finite formal model, okay, with both curly y and curly x semistable. Now, to what extent does a simultaneous skeleton trivialize a morphism? The answer is written here: on the complement of the skeleton, the morphism is a disjoint union of finite étale covers of disks by disks. But such covers can be complicated; that's the whole story. So they can be really complicated. Now, do you exclude, in characteristic p, the radicial maps, because you want the ramification locus to be contained there? Yes, in this theorem, yes. In the end, in the realization theorem, we allow them. Yeah, but okay, I agree with you. But, okay, let's ignore this point, it's minor. You can always split off the purely radicial part and then, yeah.
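For reference, the multiplicity function just introduced can be summarized as follows (with \(\mathcal H(y)\) the completed residue field of a point; the displayed identities are a reconstruction of the spoken formulas):
\[
n_f(y)\;=\;[\mathcal H(y):\mathcal H(f(y))]\quad (y\ \text{of type } 2,3,4),\qquad \sum_{y\in f^{-1}(x)} n_f(y)\;=\;\deg f,
\]
and for an interval I in Y with the radius parameterization, the restriction of f to I is piecewise monomial with slopes (degrees) given by the values of n_f; in particular f is a local isomorphism at y exactly when n_f(y)=1.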
Okay, description of tame morphisms. For any topologically tame morphism, that is, a morphism which is topologically tame at any point, an étale cover of a disk by a disk is actually trivial, so just a disjoint union of disks. And also for annuli: any topologically tame étale cover of an annulus by an annulus is Kummer, so it is given by a simple formula. And it follows that if f is topologically tame, then the morphism splits outside of the skeleton. Anything interesting can happen only on the skeleton, and it is constant along any edge: on any edge we have just the same power e, the same Kummer map, because any edge is just a skeleton of an annulus. So this gives a very good combinatorial description of tame étale covers. In particular, what we actually get is a map of graphs with multiplicities, and it satisfies natural conditions. I'll run through the first two, because they're really obvious: you have some local constancy of multiplicities, so let me not comment on those. A little bit more subtle is the condition that we also have a local Riemann-Hurwitz condition for this map of graphs. So we are given a map of graphs, gamma y to gamma x, something like this. And for a vertex here and its image, we can relate the genus of v and the genus of u using the ramification along the edges, by the usual local Riemann-Hurwitz. And not surprisingly, this follows from the Riemann-Hurwitz formula for the reduction curves, so it is not stronger than the usual Riemann-Hurwitz. And in fact these three conditions are the only numerical conditions the map of graphs should satisfy: there is a lifting result, that if you are given a morphism of graphs which satisfies this, that and that, then you can lift it to be a skeleton of a morphism of Berkovich curves. Okay. Now, problems with the wild case. Okay, as I said, an étale cover of a disk by a disk can be complicated. Second, the extension of residue fields can be purely inseparable. If it is purely inseparable, we can say nothing interesting about the map of reduction curves, so Riemann-Hurwitz just completely breaks down. Third, even if it is separable, the local term is now larger than e minus 1, and it involves the different in general. So in the case of wild ramification, even if the extension is separable, we have trouble. Okay. And two more examples. The non-splitting set can be huge. For example, and here I would like to draw a couple of pictures, let's consider the map t goes to t to the p over, let's say, C_L; it's C_p over there, but let me take here a general l. And then the map looks something like this. And it turns out that there is a metric neighborhood of the chord from zero to infinity where the map is not split, and it splits outside. So this size is an absolute value; okay, let me write it immediately. It turns out that this radius r is equal to 1 if l is not p, if we are in the tame case, and it is the number equal to the absolute value of p to the 1 over p minus 1, if l equals p. And this can be explained very easily: you should consider the radius of convergence of the p-th root of 1 plus t. This is 1 if l is different from p, and it is the absolute value of p to the p over p minus 1 if l equals p. So you just try to solve this equation in H of x: you cannot solve it within this radius, but you can solve it when you are outside. So we see that in general, in the non-tame case, the non-splitting set can be a huge chunk of the graph, okay.
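Two of the facts used in this discussion, written out under standard conventions (e invertible in the residue field, and the base field C_p for the wild example); the normalizations are a reconstruction:
\[
A(r,1)\longrightarrow A(r^{e},1),\qquad t\mapsto t^{e},
\]
is the Kummer cover of annuli, and up to isomorphism every connected topologically tame étale cover of an annulus by an annulus is of this form (this is the Kummer description referred to above). For the map t to t^p over C_p, the splitting radius comes from the radius of convergence of the binomial series:
\[
(1+T)^{1/p}=\sum_{n\ge 0}\binom{1/p}{n}T^{n}\ \text{converges for } |T|<|p|^{\,p/(p-1)},\qquad\text{giving a non-splitting tube of radius } |p|^{\,1/(p-1)} \text{ around } [0,\infty].
\]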
And the situation becomes even more complicated if we consider, for example, C_2, mixed characteristic (0,2), and a double cover of P1, f_lambda from E to P1 over K. In such a case we have four interesting points: infinity, lambda, 0 and 1, and it turns out that if the absolute value of lambda is larger than the absolute value of 1 over 16, the number which is written here becomes the absolute value of 2 squared in our case. And if this tube of exponential radius one fourth does not touch this locus, then the situation is relatively simple. We know that we have two disjoint parts: here we can extract the root of t minus lambda, here we cannot extract the root of t minus 1, everything else is split, and above this interval which connects the two non-splitting regions we have two preimages. So what we get, here we have something like what I drew: what we get is a bad reduction curve, and one can even compute the length of the loop in terms of the absolute value of lambda. So this happens if the absolute value of lambda is larger than 1 over the absolute value of 16. But if, okay, I don't have time to explain what happens if we are on the border, but it's easy to see: if these two guys touch, if they just meet at a point, then instead of this you'll get one point, and this is an ordinary reduction point. Now, the interesting thing is when the absolute value of lambda is less than 1 over 16. This is equivalent to the fact that the absolute value of j is less than 1, because there is a scaling factor between j and lambda, and this is equivalent to j tilde equals 0, that is, the reduction curve is supersingular; supersingular reduction. And in this case, just by hand, one can compute the following situation: lambda, infinity, 0, 1. There is still the same locus around 0 and infinity, the same locus around 0 and 1, but at the touch point there is, first of all, some direction, I think something like the square root of lambda, in the direction of the square root of lambda, maybe, not completely certain, where you'll get a supersingular reduction point. So this point has genus equal to 1 and supersingular reduction, and the distance from this point to the two axes is smaller than the absolute value of 4. So this point is located much closer than you would expect: you have a meeting of two things, and suddenly they cancel one another. Okay, so this picture is definitely a little bit strange when you see it for the first time. Now I'll try to explain where it comes from and give it some logic.
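For the double cover example above one can take the Legendre family; the following standard formulas are included only to make the threshold at |lambda| = |1/16| plausible (residue characteristic 2, so |2| < 1 and |1/16| = |2|^{-4} > 1):
\[
E_\lambda:\;y^{2}=x(x-1)(x-\lambda),\qquad j(E_\lambda)\;=\;2^{8}\,\frac{(\lambda^{2}-\lambda+1)^{3}}{\lambda^{2}(\lambda-1)^{2}},
\]
so for large |lambda| one has |j| = |2|^{8}|\lambda|^{2}, which exceeds 1 (Tate-type bad reduction, a loop in the skeleton whose length is governed by the valuation of j) exactly when |lambda| > |1/16|, while |j| at most 1 gives potentially good reduction, and in residue characteristic 2 the reduction with j tilde = 0 is the supersingular one.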
Okay, so the different. Okay, just motivation. These last two remarks, especially the last one, say that the different might be an interesting invariant to look at, especially because the different measures the wildness of extensions. If the residue field extension is inseparable we have a wild extension, so it's natural to check how wild it is. Okay, so we will indeed look at the different. Now the definition which I'll use in this talk: the different of a separable algebraic extension is the absolute value of the annihilator of Omega of the ring of integers of L over the ring of integers of K. And yeah, before Arthur says that this is an incorrect definition, let me make two remarks. First, we use multiplicative language; a small remark: the usual different would be minus the logarithm of what is written here, like exponential distance instead of distance, so we use multiplicative language. And the important remark is that this definition is the right one only because, in our situation, Omega of L-circle over K-circle is almost of rank 1: it is a subquotient of L-circle. So in general you should use a smarter definition, which involves Fitting ideals or something like that. It is doable, but it's more involved; I prefer the simple definition for the talk, because one can just prove by hand, since we are in a sort of one-dimensional situation, we only study curves over an algebraically closed field, that in our case this module is a subquotient of L-circle. So this definition works. Okay, one can also consider the log different, defined similarly but using logarithmic differentials. If K is discretely valued, these two are related by uniformizers; in general there are no uniformizers, and they are just equal. The classical Riemann-Hurwitz formula for a morphism of algebraic curves is written using the different, so I just put it here; it's in Robin's book, and it's very classical. Okay. And now the different function. So we will naturally denote by the hyperbolic part the set of points of types 2, 3, and if you wish 4, but again, 4 is not important for us. And to a generically étale f from y to x we assign the different function: to any point y we just assign the different of H(y) over H(f(y)). Now, a small side remark. We never work with discrete valuations, unless we consider the trivially valued case, which is not interesting. Otherwise the valuations are not discrete, and then there is no difference between the different and the logarithmic different. In fact it is better to interpret what I define as the logarithmic one, but we do not have to distinguish. Okay. Now, this function measures wildness; in particular delta_f is constantly equal to 1 in the topologically tame case, and it easily explains all the phenomena we saw so far. First of all I want to give you intuition why this is helpful; after that I will formulate the statements. Okay. So, first of all, it turns out that in mixed characteristic the different is always at least the absolute value of p for an extension of degree p. So there is a minimum, and this minimum is achieved on the very bad locus, on the chord which connects the two ramification points. It's very natural. And the different, yeah, we measure the different on y; so this is y to x, so here. So this is the absolute value of p, and it, excuse me, increases in all directions with constant slope p minus 1, or degree p minus 1; it is a monomial function of constant degree. So at distance absolute value of p to the 1 over p minus 1 the different becomes 1, and this is precisely the boundary of the wild ramification locus: when the different reaches 1, wild ramification stops. So it completely explains what goes on here, and it turns out that here also one can describe the picture in a funny way. Again, it's the absolute value of p here and here, so just slope 0; it goes with slope 1 in all directions outside of here, so it goes here and here with slope 1, but from here you have slope 3, and this leads to the supersingular point, where you have more ramification. Everywhere else you have just slope 1, so from this picture you can completely describe the wild ramification locus. It will not be a metric neighborhood now, because the different is changing, but it will be a sort of conic neighborhood.
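For reference, the multiplicative different used here and the classical formula it enters (L-circle and K-circle denote the rings of integers, and the second formula is the usual Riemann-Hurwitz with different for a generically separable cover of smooth proper curves):
\[
\delta_{L/K}\;=\;\bigl|\operatorname{Ann}_{L^{\circ}}\bigl(\Omega_{L^{\circ}/K^{\circ}}\bigr)\bigr|\in(0,1],
\qquad
2g_{Y}-2\;=\;(\deg f)\,(2g_{X}-2)\;+\;\deg\mathfrak d_{f},
\]
where \(\mathfrak d_{f}=\sum_{y}d_{y}[y]\) is the different divisor; in the tame case \(d_{y}=e_{y}-1\), while in the wild case \(d_{y}\ge e_{y}\), which is exactly the failure of the naive local term mentioned above.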
Okay, now general facts. First of all, the different function is piecewise monomial on intervals; probably this is due to Lütkebohmert, and we also proved it for type 4 points, so we improved it a little bit. The idea of proving this is very simple: on intervals, we can divide the interval into pieces which are embeddable in annuli, and on annuli you just write down a series, you compute, and you see. It extends to a piecewise monomial function, one which on any interval behaves like a piecewise monomial function, also at type 1 points, and one can describe the limit behavior at type 1 points. It turns out that it becomes constant if you have tame ramification; in positive characteristic, for example, you may have wild ramification at type 1 points, and then the different vanishes there, and you can describe the slope. And the main property: you have a balancing condition at any type 2 point. This is the most important one, because in order to understand the situation we would like to understand how it happens that we have slopes 0, 0 and everything else 1, or slopes 1, 1, 3 and everything else, and so on. What is the combinatorics of this picture? The answer is here. It's just a sort of analogue of the Riemann-Hurwitz formula: this is precisely the genus part of Riemann-Hurwitz, and the local contributions are also something very similar. And in particular, almost all terms here are 0: almost all slopes are equal to the inseparability degree of the extension of residue fields minus 1, so almost all local terms vanish. Okay, now what about the proof? I'll give a very brief indication of the proof of the balancing condition. Ah, maybe I have to make one remark first. It seems, and this comes from my discussion with Ahmet, that this formula is very probably a close relative of Kato's formula for vanishing cycles. And probably this formula was described by Rino for covers of curves over DVRs. They seem to be independent: we work here not over a DVR, and so on, so formally there are probably no implications, but they must be related. But again, I do not have any formal claim here. Our proof is very simple; it takes about two or three pages. The idea is that delta_f is a family of differents; it is just the definition of the different. So we consider a lattice in omega_X, a sort of lattice of integral differentials: just the minimal O_X-circle module which contains the differentials of O_X-circle. And then consider omega_Y integral over the pullback of omega_X integral. This is a torsion sheaf of K-circle modules whose stalks are almost cyclic and precisely measure the different. Then choose some element of K-circle of absolute value equal to the different, and compare the reductions of omega_Y integral and of the pullback of omega_X integral, shifted by this element. These two reductions produce a nonzero meromorphic map between, over the reduction, sorry, it should be from the pullback of omega of C_X to omega of C_Y. And the balancing condition just boils down to computing the degree of this sheaf by poles and zeros: the degree is precisely the left-hand side of the balancing condition, and the local terms give you the rest. Okay, I should just say here that if the extension is inseparable, then in general the natural map from here to here is zero, so there is no interesting map; but if such a picture appears as the reduction of a map between analytic curves, then we can produce an interesting meromorphic map, and it is responsible for the slopes of the different. Okay, now minimal skeletons. We say that a branch at a point of type 2 is trivial with respect to the different if the slope is the expected one.
If you remember, almost all slopes are equal to some number, and then they do not contribute local terms to the balancing condition. In such a case we say that this direction is trivial at the point of type 2. And the theorem says the following. Suppose we are given a skeleton of x and we want to enlarge it and fit it inside a skeleton of f. So we start by fixing something on x; to avoid situations like p1, where we can choose different skeletons, we just fix some skeleton of x. And then we ask what the minimal simultaneous skeleton of x and y containing the given skeleton of x can be. The answer is given by the theorem: if we have a skeleton and we take its preimage, gamma y, then a priori gamma y does not have to be a skeleton, but it turns out that we do get a skeleton of f if and only if the ramification locus of f is contained in the vertices of gamma y and, for any point of gamma y, all branches pointing outside of gamma are delta_f-trivial. So this actually produces, at least in theory, an algorithm: given a cover, how to find a stable model of what you have above. For example, here you start with the obvious candidates for bad points, the ramification points; we should include them, and we should take the convex hull. But we see that there is a strange point on the skeleton where the different behaves abnormally, so we must include this direction too. When we have included all directions with non-trivial different, when we are done, we get a skeleton of both. Okay. And also, the different controls the non-splitting set: if the degree is p, then the different, as I explained on that example, completely controls the non-splitting locus. But if the degree is, for example, p squared, or pq, then it may happen that there is a huge locus where the degree is p, another where the degree is p squared, and so on; this cannot be controlled by a single invariant. So the question about all the non-splitting loci is still open; the different cannot control them in general. And this leads to the last part of my talk. Let's say that a closed subset of a curve is radial with respect to a skeleton gamma if there exists a function on gamma such that S consists of all points whose inverse exponential distance from gamma is bounded by r. Okay, maybe it's better to explain it just by a single picture. Let's assume this is some edge inside gamma, and there is some function r on gamma, from the edge to R. The set S is required to consist of all points whose distance to gamma is controlled by this function on gamma. So it is something like a radial set. Okay. In all my examples, the locus was such a radial set, with radius given by the different. Okay. So, theorem: there exists a skeleton of f such that its y-part radializes all the multiplicity sets simultaneously. Moreover, if we found one such skeleton, then any larger skeleton will also do the job. Moreover, in three cases you can choose any skeleton whatsoever: these are the cases when the degree is p, when f is tame, and when f is Galois, or even normal, that is, a composition of a radicial and a Galois cover. Example: if the degree is p, then the different completely controls the situation, because we know that outside of the skeleton everything is different-trivial and the different decreases with constant slope; so the different just gives you the radius of the multiplicity set. Okay.
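One plausible way to write the radiality condition, using the retraction \(\rho\colon Y\to\Gamma\) and the inverse exponential distance \(d(y)=d_{\exp}(y,\Gamma)^{-1}\in(0,1]\) from before (the direction of the inequality is a reconstruction of the picture being described):
\[
S\;=\;\{\,y\in Y\;:\;d(y)\,\ge\,r(\rho(y))\,\}\qquad\text{for some }r\colon\Gamma\to(0,1],
\]
so S is a tube around the skeleton whose radius varies along Gamma; in the degree p example the non-splitting locus is such a tube, with r determined by the different.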
How to prove it? It's really very simple; I call it the splitting method. It's used a lot in the theory of valued fields. So, if you are given a problem about valued fields, you often want to solve it as follows: for tame extensions and for degree p extensions you can usually do it by hand; then you extend to compositions and you get Galois extensions, because any Galois extension can be split into a wild and a tame part, and the wild part can be split into degree p steps; and finally you use some descent to deal with the non-normal case. And the realization theorem is proved just by this method, only slightly globalized. One uses that the category of étale covers of a germ of a curve at a point is the same as the category of étale covers of the spectrum of H of x. So locally you can split a morphism, like extensions of valued fields. Okay. And finally, since I'm starting to run out of time, I'll just say the last thing very quickly without spelling out details. The last question is: once we have a realization theorem, you would like to know what the radii are. We solved it for degree p: there it is the different. So, is it a natural invariant of the extension of completed residue fields? The answer is as follows. If we just consider these radii, r1, r2, and so on, it's a bad invariant, because it's not compatible with compositions: if I give you a composition of two functions, it's difficult to express the radii of the composition through the radii. But you can rearrange them in a clever way. It would take two minutes to explain, so just believe me: you can rearrange them as a sort of function from [0,1] to [0,1], which I call the profile function, and it is equivalent to the original radii. And then the theorem says that if y to x is generically étale, then for any point y of type 2 the profile function, the function which precisely measures all these radii where splitting happens, where we have the locus of degree p to the n, then p to the n minus 1, and so on, is precisely the Herbrand function of the extension. Two last comments. Even to formulate this theorem, one has to extend higher ramification theory to the non-discrete setting. It did not exist for the non-discrete case; you just take the proofs of Serre from Local Fields and write something about almost monogenous extensions, it's doable. Then you can formulate the theorem, and the proof is again straightforward by use of the splitting method; for a simple extension of degree p you just compare the two theories, because both are controlled by the different. And the last comment is that these radii are a priori defined only at type 2 points, but obviously they form a piecewise monomial family, so there is a result about this too. Okay, thank you for your attention. So, thank you very much. Questions? You mentioned that you consider the different, and along the edges it is piecewise linear; but what about continuity? It cannot be continuous in the Berkovich topology, for a very simple reason: the locus I described, a metric neighborhood, is not closed and not open in the Berkovich topology. It is closed in the metric topology, but the metric topology is much stronger, it is not locally compact, while in the Berkovich topology your curve is locally compact. Okay, at a Berkovich conference I would stress this point. It is semi-continuous for the Berkovich topology, by an easy argument; it is always less than or equal to what you expect. Yes, semi-continuity you have, but that's all. And it is continuous for the metric topology; that follows from this radialization theorem. And it is definitely not continuous in some cases. Okay, as you said, it's pretty good.
How does your higher ramification behave in towers? The Herbrand function, in this case again because you are almost monogenous, you are sort of in rank one, is compatible with towers: when you have a tower, the Herbrand function of the composition is the composition of the Herbrand functions. But the profile function, which again has a very clear geometric interpretation, makes this obvious: you just consider an arbitrary interval here, from gamma to a type 1 point, and you take its image. This is zero-one, and this is zero-one, and the restriction of f gives you some map, a piecewise monomial map; this is the profile function. If you are radial, it is independent of the choice of this interval. So for this definition it is obviously compatible with compositions; for the Herbrand function you have to prove it, but it's like in the classical theory of higher ramification.
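The compatibility with towers invoked here is, in the classical discretely valued case, the transitivity of the Herbrand function (Serre, Local Fields, Ch. IV); one expects the same shape in the present setting, so this is only an indication:
\[
\varphi_{L/K}\;=\;\varphi_{M/K}\circ\varphi_{L/M}\qquad(K\subset M\subset L),
\]
whereas the profile function is defined directly as the piecewise monomial map from [0,1] to [0,1] obtained by restricting f to an interval from the skeleton to a type 1 point, so its compatibility with composition is immediate.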
I will describe the structure of finite morphisms between smooth Berkovich curves. The tame case is well known so the accent will be on the wild case. In particular, I will describe the loci of points of multiplicity n and their relation to Herbrand function and the ramification theory. If time permits I will also talk about the different function associated to a morphism.
10.5446/20239 (DOI)
Thank you very much. I am thankful to the organizers for the invitation, and I am very happy to talk at this conference for Professor Ogus. The book Notes on Crystalline Cohomology by Berthelot and Ogus was a Bible for me when I was learning crystalline cohomology, and I always kept that book at hand when I was young. He is now completing a big book on logarithmic geometry, and I hope that book will also become a Bible for young people. My talk is related to log structures. This is joint work with T. Fukaya and R. Sharifi. So, first I compare the characteristic zero situation and the characteristic p situation. Here is the moduli space of abelian varieties, and then there are many compactifications: there is the Borel-Serre compactification, and also the reductive Borel-Serre compactification, and the so-called toroidal compactifications, and here is the Satake and Baily-Borel compactification. And we consider similar things for the function field case. Here is the moduli space of Drinfeld modules, which appears in the function field case. Here there is already the Kapranov and Pink compactification: in my understanding, Kapranov did the case where the function field is the rational function field, and Pink constructed it in general; and this is similar to the Satake and Baily-Borel compactification above. And we wondered what is happening here. With these collaborators we wrote a paper on the analogue of the reductive Borel-Serre compactification, and this paper will be posted to the arXiv soon, maybe in 10 days or something like that. And we are trying to construct the toroidal compactification, but there are still several things to solve, so I hope we can succeed, but I cannot say definitely that we have done such a compactification. And this is the map to the analogue of the reductive Borel-Serre compactification, and here is the toroidal compactification; but as I said, that paper is not yet written, the situation is still not perfect, so a small question mark should be here. This is maybe too early to talk about, but these are closely related, and this one is some preparation for the toroidal compactification, so I need to talk a little about it, because I hope to present the total picture. Concerning the toroidal compactification, Pink published a summary paper already in 1994, and to my knowledge the details have not been written; probably our method is a little different from the method described in that short article. And so log structures appear, because the toroidal compactification in the abelian variety case is a moduli space of log abelian varieties, and this toroidal compactification is a moduli space of log Drinfeld modules. So the log appears. And these collaborators and I are studying Iwasawa theory. The reason why we need this toroidal compactification is that we are studying the Sharifi conjecture, which was formulated in the paper of Sharifi in the Annals of Mathematics in 2011. That is a new theory in Iwasawa theory, which uses the boundary of the modular curve. And the three of us are collaborating to generalize his conjecture to more general situations, to Shimura varieties or Drinfeld modular varieties, from GL(2) theory to GL(n) theory. And then such a boundary becomes necessary.
So that is why we need this, for the new study of the Sharifi conjecture in the function field case, for GL(n). Then, so first I introduce Omega and X. We consider three situations: in A, F is Q; in B, F is Q; and in C, F is a function field in one variable over some finite field. And we consider a finite dimensional vector space V over F. What is the difference between A and B? That will appear here: here the dimension of V is d, and here the dimension of V is 2g and V is endowed with a pairing V times V to F which is non-degenerate and antisymmetric. So here d is the dimension. And I need a place infinity: for A and B, infinity is the archimedean place, and for C we fix a place infinity of F. Then I first define Omega. When we talk about Omega we assume that we are in B or C. In B we simply assume that a basis is already fixed, so that V is identified with the standard one. Maybe C is the most important for us, so I first define this. In the case of C, let C-infinity be the completion of the algebraic closure of F-infinity, where F-infinity is the local field at infinity; for A and B it is equal to R. And Omega is the set of (z_1, ..., z_d) in C-infinity which are linearly independent over F-infinity. In the case of B, Omega is the Siegel upper half space of degree g. A is not important for us, but I talk about A for comparison with the other things. Then, roughly speaking, this is a moduli space of Drinfeld modules of rank d, and this is a moduli space of polarized abelian varieties of dimension g. So then next I define X, or X-infinity. In the cases A and C, it is the set of norms on V-infinity, that is, F-infinity tensor V over F, modulo homothety; I will explain this. And in the case B it is the set of norms on V-infinity compatible with the pairing, again divided by homothety. Here a norm means, in the case of A and B, a function from V-infinity to R such that, for some basis e_1, ..., e_d, mu of a_1 e_1 plus ... plus a_d e_d equals the square root of a_1 squared plus ... plus a_d squared; this is for A. And for the function field case it is the maximum of the absolute values of a_1, ..., a_d, where the a_i are in F-infinity and the absolute value is the one at infinity. So in case C you are working with the analogue, what you call C-infinity, which is the completed algebraic closure of F-infinity, but the value group is smaller than the reals; so does this mean you don't include norms whose values are outside the value group, or do you multiply absolute values by real constants? Well, in the non-archimedean case you do multiply absolute values by some real constants. If you want the whole space of such norms, it is the space which is sometimes considered, depending on what you want to consider; some people consider, I think, the space of Goldman and Iwahori.
Ah, sorry, this is a real value to normal. Yeah, but in coordinates should be maximum... some real constants multiply by absolute value. Oh, maximum constant, but we... there is some basis, and so that is OK. No, no, no, no, no, no is the... is this standard... has a standard form for some basis. Yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah, just the... for some basis then the... the... the function has this nice shape. That is the definition of the norm. Yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah. Yeah, and in the case of B then the... the... the norm is compatible with... with... with... with... this means that there is some basis... basis EI... of V... here, now, antisymmetric pairing is given, and for which is... for which is the... the... the... this pairing is a standard one. The... and... and the... and the mu mu mu mu mu A1, A1 plus AAD, A2G... E2G is... is again of... A1 square plus A2G... square. Yeah, yeah. Oh, sorry. In... in non-artimidate case, you can see... it's only a countable set of un-countable... It is uncountable. That is at the political space. No, no, no. You can see the set. It's a loss of countables. You don't have to get countables. Countable space. Countable set. No, you can't do that. You can't do that. You can't do that. But on f infinity, yeah. Yeah. Yeah, yeah. So this is a countable, no maybe no. You can make a convergence. That is like a real vector space because you are considering the values in real numbers and then the countable, no, no, no. Oh, yeah. You are right. Perfectly right. Yeah. Sorry, sorry. There is a... Yeah. Yeah, yeah, yeah, yeah. Sorry, sorry. This is... I am very sorry for this. Yeah. Yeah, yeah, yeah, yeah, but that is yes, yes, yes. I am sorry, I am sorry. I am sorry. Yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah. Yeah, yeah, yeah, yeah, yeah, yeah. Very sorry for this stupid thing. Oh, yeah. We need this. Yeah. Yeah, I am sorry for that. I'm sorry because... So, then here you need such square. Yeah, yeah, yeah, yeah. You need... Yeah, sorry, sorry. Yes, so this is a... Yeah, so that is die. We can make it diagonal by taking the basis. Yes. So, then... Yes. So, then now... So, then... Ah, this is important. So, I will not... Not... Not erase this one. But, then... Yes. Oh, yes. Maybe... Yeah, yeah, yeah, yeah. Okay. This should be here, but I think the most important thing should be... Maybe... I think the convention here is that maybe the most important source. Ah, sorry, I didn't... Okay, okay, yeah. So, then... Then... Yeah. So, then... Sorry. Ah, sorry. Then, ah... This is just a multiplication by... multiplication by... by some element, positive. Then, we will have a class of those. Yeah, yeah. And then... Then, ah... So, in the case of A, and then... Then, X... X is... I can modify it to SLD. R divided by SOD. Ah, sorry. Sorry. Sorry. Sorry. And B, in the biggest B, I... X is isometric to ASPG, R... R divided by a maximum factor. And in the case of C, then X is a standard geometric realization of the... realization of the... of the Bvialfitz building... building... for... for... Pgrv infinity, yeah, for this algebraic group. And then we have a map. We have also in the case of B and C, then we have a, if they are there, we have omega. And we have a map from omega to x, a continuous map. This is in the case of C, it is defined as z, d goes to mu, mu is r. Mu is mu A1, Ad is A1, z1 plus Ad, zd. So this is inside the C, but because z1 to zd are linearly independent of infinity, so this becomes a norm. 
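The map from Omega to X described here can be written explicitly in case C; the following is a reconstruction of the dictated formula, with \(z=(z_1,\dots,z_d)\) having coordinates linearly independent over \(F_\infty\):
\[
\Omega\longrightarrow X,\qquad z\;\longmapsto\;[\mu_z],\qquad \mu_z(a_1,\dots,a_d)\;=\;\bigl|a_1z_1+\cdots+a_dz_d\bigr|,
\]
the class \([\mu_z]\) being taken modulo homothety; linear independence over \(F_\infty\) guarantees that \(\mu_z\) is indeed a norm on \(V_\infty\). In case B the analogous map sends a point of the Siegel upper half space to the class of the associated Hodge norm.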
And so this is a Krasov. And then for the case of B, then it is z goes to mu, mu is a Hojimetric, the Krasov is a Krasov, and the Krasov is a Krasov. So this is the Krasov. And then we have a map from omega to x, and then we have a map from omega to x, and polarized by this. And then in the case of B, then this is the Homme-Omori-Shizun. And so next we consider the section 2 is concerning the, first we consider the compactification of this x, and then the total compactification is related to how to extend omega. And so first we consider the extension, enlargement of x bar of x and x bar flat of x. So then the, so first x bar, this is the case A and C. C is C means G, this seems that I deleted something. Sorry. This is the, the first case is G, just the, yes. Second, B is the for the case with pairing, and A and C are similar, but the fields are number q, and this is function fields case. And here then, then this is x bar is the search pair, where p is the f parabolic subgroup, subgroup of p, g l, g l v. And, and mu is, and then p correspond to a filtration, v minus 1 v 0, v m v, parabolic subgroup is a, isotropy group of such a flag. And then mu is mu, mu i 0, i m fair, mu i is in x v i divided by v i minus 1. And then the, and in the case of B then, then some, some modification is necessary. And B is, in the case of B then, x bar is again p mu, fair, fair, p is a f parabolic subgroup of, subgroup of g c s p, v o i s p v, that's the same thing, essentially. And mu is, mu i, here, here, so the p correspond to a, some, some such, such that the, the, the annihilator of v i for the sparing is v minus 1 minus i, for all i. And, and mu, mu, mu 0, mu 0 is, is in x v 0, v minus 1, with, with, with, sorry, I forgot the notation infinity there. And here, here, we can have a pairing for the restriction of the original pairing. And mu i, mu i for, for i, not 0 is, is, is in, in x v i divided by v minus i infinity. Here, there is no, no pairings, we just take the space of norm's, modulo, modetti. And then, the main, now, the theorem is, then we have some topology on this x bar. The definition of the topology is rather involved and I'm afraid that I don't have time. That is the, the, some kind of topology. And I, or, or, or some, or, or some, or, or, or, or, or, or, but, but I, I, I, I am afraid that I don't have time to, to, to, to explain. Then here is the theorem. So that is, for a, b then this is well done. It should be so that our parabolic, which I think relates to the grandian. Since we are symphlectic, you can also have a parabolic. No, no, no, I think. There are some, some exceptions. No, no, do you get all the parabolic subgroups in this way or? Maybe, I, I, I believe so, yeah, but if, if it is not so, then please take this, this, this, this such thing corresponding to such thing. Yeah. Otherwise, the definition doesn't work. So, so I, yeah, yeah, yeah, yeah. So, and this, ah, and we don't know. And, and see the, the, the, this is a recent paper of, joint paper of, of, of, of, of, of, of, of, of, of, of, of, of, of, people including, including me. So, so that is probably new, the, the, the, but, because of, we wrote a paper on, on this and that, that paper will appear, posted, you will be posted in archive soon. 
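To record the shape of the definition just given, one plausible way to write the boundary object is
\[
\overline X\;=\;\bigl\{\,(P,\mu)\;\bigm|\;P\subset\mathrm{GL}(V)\ \text{an $F$-parabolic with flag } 0=V_{-1}\subset V_0\subset\cdots\subset V_m=V,\ \ \mu=(\mu_i),\ \mu_i\in X(V_i/V_{i-1})\,\bigr\},
\]
with X(W) denoting the space of norms on \(W_\infty\) modulo homothety; in case B one additionally requires the flag to be compatible with the pairing (the annihilator of \(V_i\) being \(V_{m-1-i}\), in the indexing used above) and modifies the graded pieces accordingly, as described.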
But we are not perfectly sure whether this is really new. I will write the theorem: the quotient of X-bar by gamma is compact Hausdorff, for gamma commensurable with GL_d(Z) in the case A, with Sp_2g(Z) in the case B, and with SL_d(O_F) in the case C. Here O_F is the ring of elements of F which are integral outside infinity. And so for A and B this quotient is the so-called reductive Borel-Serre compactification, a standard object. For C, we imagine that somebody has already studied this, but we could not find a reference. And there is a similar subject: X sits inside X-bar, by taking P to be the whole group, the improper parabolic subgroup; and then there is X2-bar, which is compact and carries a Satake topology, not the induced topology, and which is defined by using parabolic subgroups over F-infinity in place of parabolic subgroups over F. This is for all of A, B, C. And there is such a story; for example, in the case A, and maybe B, with g equal to 1, X-bar is the upper half plane together with P1(Q) as boundary points, and X2-bar is, well, in this case you have a homeomorphism from the upper half plane H to X, as I wrote here, and then X2-bar is H together with P1(R), and a parabolic corresponds to such a boundary point. And this is compact for the natural topology, but it is usually considered with the Satake topology, not this topology; and then, after dividing by gamma, you get a Hausdorff space. And this X2-bar for C is a so-called, well, such general compactifications of X were already studied in old works of several people.
And so, such similar things are already studied and so, it is strange that the theorem for C is not known, but, but, maybe known, but, and, but the proof is, is just a, proof of the theorem for C is just the imitation of the other, other cases, A and B, but, but still some, some strange things are happening there and so, so the, this paper by three people have, has 50 pages, not so, so simple, that was not so simple, yeah, that is strange thing, but, but, and then, then I think, I, I also talk about X, X bar flat, X bar flat is, is, is, is, is, in this A and B then, in C, yeah, in C then X bar, X bar flat is, is a quotient of X bar and X bar is, is W here, here, here, W is a subspace, a subspace and, and, and, and, and, and, and, and, and, and, and, and, and, and this is, and B in the case of B, B then X, B X bar flat is, is, again, W and, mu, yeah, and then, this is, W is in V and, and then, such that W, W, the manipulator of W is contained in W and, and then, mu is in, in X, W, W, this and, and, and, and this, yeah, and then the map from X bar to X, B is, is a, just a, a, P, P mu goes to V zero, mu restricted by, just mu zero and, and, and, and, yes, this is the definition, I know the B, this, this is for, for A and C and for B then, it is P mu goes to, to, to, to V zero, to, to, to V zero, B minus, mu, mu, mu zero and, yes, no, no, no, no, no, that is, fine, yeah, yeah, yeah, yeah, that's same, same thing, yeah, yeah, yes also, yeah, this is same, yeah, for, for, for all A, B, C, yeah, yeah. And so, that if a subjection and, and so the X bar flat has a coefficient topology of X bar, or one This is another. Also, we can define the sataketopology, which may not be the image of the sataketopology of X. The search complicates things. But I don't discuss such too much details. And again, Seorem is that again, this is how it's looked compact. And for A and B, then this is a result of sataketopology. This is proved by him in 1964, I think, in his paper in the analysis of mathematics. And so C may be new. And we can have, again, X bar, flat. This is also a compact space. This is compact, first rule. And this has a sataketopology, which is different from this space. And this is so-called compactification by Uena of the pre-war-techs building. In the case of C. And A and B, in the case of AB, it is already studied by sataketopology. In another paper, his paper in 1964, something like that. This paper also appeared in analysis of mathematics. So when he was young, sataketopology, two papers in analysis of mathematics. One is this one, and the other is this one. Yeah, yeah, so then these stories are actually, and we have nothing to return down, but maybe the results are maybe generalized to all. All reductive groups are not only GLD or PTSD. And the problem should be the compactification of the understanding of the building of the space of norms may be not so good for generalization. And then we have to directly consider the building, I think. But similar arguments may be OK. It can be down there. And also the version of SU. Of course, in the case of number fields, then those are already known, essentially. And for X bar, X2 bar, then people already studied all reductive groups. Yeah, and so the version for S, S are finite set of primes, S are arithmetic subgroups. S for S are finite set of primes, set of places, including all arachnidian places. Then X is now the XV for all V. And then we can define X bar and for X bar, S are, and then the S arithmetic subgroup group is has no compact. 
This is written in that paper, which will be posted to the arXiv soon. So there is such an S-version. These are remarks; I did not introduce this generalized version because the story becomes too complicated, so I just considered the case of one place. And then the final thing I will talk about is the story of the toroidal compactification. But as I said, the paper is not yet written; we are just starting to write it, and it can happen, as usual, that some terrible problems appear in the writing. At present I hope the things can be solved, but I am not perfectly sure. But I will describe the picture. Section three. First, we only consider B and C: in the case of GL_d over the number field there is no analytic structure for the spaces which appear here, so we consider B and C. And then M_K: so G is GSp(V) in the case B and GL(V) in the case C, and K is an open compact subgroup of G of the finite adeles of F; the finite part means without infinity. Then we can define the moduli space M_K: it is the moduli space of polarized abelian varieties of dimension g with K-level structure in the case B, and the moduli space of Drinfeld modules of rank d with K-level structure in the case C. And then, as is well known, you have a presentation of M_K over C, respectively over C-infinity. I talk only about the analytic theories; we hope the algebraic theories can also be done, but today I present only a very rough story. It is well known that M_K can be written as a double quotient involving Omega and G of the finite adeles modulo K; this is for B and C, and Omega, in the case C, is the Drinfeld upper half space. Then let M_K-flat be the compactification of M_K by Baily-Borel in the case B, and by Kapranov-Pink in the case C. And let M_K-bar be a toroidal compactification: in the case B these were constructed by Mumford and collaborators, and in the case C I believe we can construct them, but the paper is not yet written; still, I hope to present the picture. The picture is that this one is not unique, there is no standard one, while here a unique, standard one exists. Then let the double-bar space be the limit of all toroidal compactifications M_K-bar; there are many, so we take the limit over all of them. And then we have a commutative diagram: the toroidal compactification maps to M_K-flat, and below you have G(F) backslash X-bar times G of the finite adeles modulo K, and G(F) backslash X-flat-bar times G of the finite adeles modulo K. Note that we have a map from Omega to X, so you have a map already here, and what I am saying is that this map, phi, extends to continuous maps. And this one, in the case B, the number field case, the abelian variety case, is a homeomorphism. That is, this is the Satake compactification: it is just a topological space, while this is an analytic space, and Baily and Borel put a complex analytic structure (so maybe a C should be written here) on the topological space which Satake constructed. And so this is the picture.
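The double quotient presentation alluded to above can be written, in the Drinfeld case C, as the usual rigid-analytic uniformization (stated here only as a sketch of what the presentation should look like):
\[
M_K(C_\infty)\;\cong\;\mathrm{GL}_d(F)\backslash\bigl(\Omega^d\times \mathrm{GL}_d(\mathbb A_F^{\infty})/K\bigr),
\]
and in case B the analogous statement with G equal to GSp, the Siegel upper half space, and the finite adeles; the maps to the compactified spaces in the diagram are induced by the map from Omega to X and the corresponding maps on the boundary.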
Now I have five minutes left, but in any case the paper is not written, so I can give only a rough story. Concerning this I have just one small thing: the rough story of log Drinfeld modules. I am hoping to describe the bar version; without the bar, the toroidal compactification should be defined as a quotient of this bar version. So I describe this in short, what a log Drinfeld module is. Let S be a log smooth rigid analytic space; I am talking about the analytic situation, over C-infinity, in the case C. Since S is log smooth we have S-bar, the limit of the blow-ups of S along the log structure: if the log structure is given in this way, we blow up, then we blow up this intersection, and so on; S-bar is the limit of such blow-ups. And then a log Drinfeld module over S, or over S-bar, with K-level structure, where K is assumed for simplicity to be inside GL_d of O-hat, is such a thing: it is given by data (L, V, lambda). Here L is a line bundle on S-bar; inside S you have the open set U, the part where the log structure is trivial, and the blow-up does not change U, so you have j from U to S-bar; V is a d-dimensional F-vector space inside j-star O_U; and lambda is a K-level structure, defined adelically using the finite adeles of F; oh yes, thank you very much. And these should satisfy certain conditions. For example, one condition is the following: for all... So, which topology is it?
Ah, yeah, yeah, yeah, yeah, that is, I think there's a rigid, rigid topology, oh such thing, yeah, this is, The limit of the rigid topos, Oh yeah, yeah, I think so, yeah, yeah, such that, such that the, all T in this, but then, and all S in this, we have the following, so maybe just T is enough, yeah, all T then, then we have, we have, we have, this is, suppose, stock at T, V1, in the stock of V then, we have this, and, and, and, such that, satisfying having the following property, that property is the first, V0 is the, the inverse limit of, you know, inverse image of, image of, of, of OS, VAR inside j star OU, ah, ah, ah, ah, ah, intersection, intersection, sorry, sorry, V is V, V, sorry, V, V, and intersection OS, OS VAR inside, inside j star OU, yeah, yeah, and, and furthermore, the condition is that, then, from OS VAR to, to, to, to see at T, the, you have a, is, is, then, then induces, induces, ah, injection, this, this is injection, so that is the, the, the, we have a norm of, for this, so then, so from this then, ah, norm 1, on, on, V0 appears, ah, V0 infinity, ah, this is V0 infinity, appears, and the second condition, so that we are, we are, how, how I am actually showing how we get such V, V, V, V, V, V, the, the point of the, of the, ah, X bar, yeah, yeah, and then, then, ah, the, the, then, another condition is that, second one is that, ah, sorry, I need to, to, to, to, to, thank you very much, I am already, I need to, ah, sorry, sorry, ah, so then, then, ah, another is that, so, there is a, you have a, you have, you have, let VIO, FBR, VR, the, the, then, you have here, by, in this here, you have, or V, and then, by taking the inverse image here, ah, you can, you can have a, this defines the V of the inverse image, the integral structure is defined, and then, ah, then you have VIO, then, then for all I, ah, ah, ah, yeah, the, you have a, VIOF divided by VIO minus 1, VIOF, ah, from here, then, you have a, JOU, divided by VIO minus 1, VIOF, and then, you have a exponential of VIO minus 1, VIOF, ah, this is a Dlingel exponential map, ah, and that is the, this is the additive map, ah, so, ah, then you have J star of U, and, and the condition is that, by, by, the, by, by, by, by, ah, then, the image of, image of, VIOF minus, minus, minus 0 is in, in m minus 1 inside, inside J star of U, this is the, m is the log structure, ah, that is the, ah, CF feature, ah, yeah, that is the log structure, and then, that is the inverse of the log structure, and then, then, ah, the last, last thing is that, the, another condition is that, the, the further condition is that, condition is that, there is some homomorphism from, from m of, of, of, of, of, plus, this is the log structure, S, S, S bar, T, 2, 2, 2, 2, R, additive, such that, the, the, the composition, lambda, OF minus, lambda of minus 1, OF minus 0, ah, M inverse, and then, ah, this is by, exponential, ah, yeah, then, then, M, F, close to F inverse, H, all, all are D and T, yeah, yeah, the, this composition extends to, to a, to a norm on, on, on, on V infinity, V infinity is the, I don't know, V i, V i minus 1 infinity, it is, it is a, F, F infinity times lambda i, OF divided by lambda i minus 1, OF, so, ah, so, ah, so the norm, also, norm is defined on, on, on, on, on this, so, ah, then, by this then, ah, the, the, that, X bar and, and the, the, X bar and, ah, the, the, this, toroidal compactification is related, and so, the, the, the, this, Mk bar, bar, tol, bar is, is a, is a modular space of such, such object, so then, if we have 
such a thing, then you have a morphism extending S, so that it is a fine moduli space in this sense. So then, yeah, sorry, the last part is the rough story, and I finish here. Thank you. Question: what is the relation between the curly L and the rest of the data? Ah, sorry, very sorry; that, this should be related to the rest, sorry. That is, I just misunderstood the L's; oh, the L, the L exists, oh yes, I was, sorry; so this, that is, here, this is the L, the L is, I was thinking of the L, sorry, yeah. Is capital lambda the capital V over there? Capital lambda i of O_F? Oh, V, sorry, V; yeah, I was confused, sorry. This is V, yeah, that's V. Sorry. Oh, this is just, so I want to ask some further questions at the end. So when you consider j lower star, if you do it in rigid geometry, then you could consider sections with possibly essential singularities, or just meromorphic ones. Yeah. So what do you mean here? Yeah, terrible singularities of the functions can appear, but the condition is that M consists of reasonable functions: it is just something generated by invertible elements and generators of the divisor. We went to the inverse limit, but in the middle this one is a reasonable thing; here, this is just a reasonable thing, invertible elements and generators of the divisor. So the condition is that here we have a reasonable object, having not so terrible singularities; at worst they have poles there. Yeah, this is having poles, but in general it has essential singularities. Yeah, so that is... every terrible singularity can appear, but the condition is... this is the condition, the big condition. The assumption: condition one is this, and condition two is this. And the conditions that you put are relative to a C-valued point of the inverse limit space; so does it imply the same condition for other points, for the Berkovich points, or do you just consider C-points? C-points, so I only... C-points, yeah, C-points only. So this is also V_i. And then this is... here is what we have. Yeah. If there are no other questions, let's thank Professor Kato again. Thank you.
For a function field in one variable over a finite field, we will consider analogues of compactifications of period domains.
10.5446/20237 (DOI)
First, I thank the organizers for the opportunity. My title is Relative Log Poincaré Duality. The plan is: zero, to review some classical statements; next, Poincaré duality in log Betti cohomology; and third, log étale cohomology. As for notation, throughout the lecture Λ denotes a ring, d is a non-negative integer, and cohomology is always in the derived sense. First, let f be a continuous map of locally compact topological spaces, submersive with respect to d-dimensional topological manifolds, which means that the upper space is locally the product of the lower space and a d-dimensional topological manifold. Then for any such f and any G we have this Poincaré duality: up to the relative orientation sheaf and a shift, the upper-shriek functor is locally isomorphic to the pullback. As a corollary we have the following statement. Let f be a morphism of complex analytic spaces, submersive of relative dimension d. Then, since complex manifolds are always orientable, we have... (Do we have the bracket d also? Because you work with d sometimes. — Yes, on the right.) And the corresponding algebraic statement: on the étale side, the same statement, the same formula, holds. So these are 0.1 and 0.2; and 0.1 and 0.2 in log geometry — this is the subject of today's lecture. Sorry. Is it OK? Sorry. Section 1. First, we explain the generalization of 0.1. This is the Geometry and Topology paper. Take a morphism of fs log analytic spaces which is vertical, exact, and log smooth. Recall that exact means that this square is Cartesian. Then any fiber of f^log is a topological manifold, as explained soon, and this fiber is of dimension 2d. And we have... this is the duality. Let me briefly explain the proof. This theorem is based on the rounding theorem, which says that f^log is submersive with respect to even-dimensional topological manifolds. To show the essence, we consider an easy, simple example. Consider the diagonal homomorphism from N to N². Then our f is the map from the standard log plane to the standard log line. If we consider the associated real blow-up spaces, then the upper space is circle cross circle cross this one. Then the picture is as follows: this fiber is this, this fiber is this; the special fiber is broken, but homeomorphic to the general fiber, so there is a homeomorphism like this — the corner is rounded. Then apply the topological theorem to this map, taking care of the orientation, and we get 1.1. This joint work with Arthur is one of the most exciting and fascinating experiences of my life. Let me explain why. Of course it is a great honor for me to work with Professor Ogus, but there is more. We know that in general, in a joint work, the contribution or role of each author is not necessarily the same: sometimes one author does little and another does almost all; one author only gives an idea, another only gives proofs, another only writes things up, and so on. But in this joint work I feel our contributions were exactly 50–50, because we enjoyed all the steps of the study together — starting with the simple example, this one, and proceeding to more complicated ones, discussing proofs, generalizations, and how to write them up. So I thank my god for arranging such a nice collaboration. Thank you. I proceed to the ℓ-adic analogue, which is the analogue of 0.2. If you do not assume verticality, you will of course get manifolds with boundary, or something like that, and then there is still a duality, using extension by zero from the interior. In that paper the non-vertical case is also treated, and the statement is modified accordingly; but that is another story.
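One possible way to write the duality statements just sketched — the precise normalizations (the relative orientation sheaf, the Tate twist and the shift) are my reconstruction rather than a transcription of the board:

$$ Rf^{!}G \;\simeq\; f^{*}G \otimes \mathrm{or}_{X/Y}\,[d] \qquad \text{(0.1: topological, submersive of relative dimension } d\text{)}, $$

$$ Rf^{!}\Lambda \;\simeq\; \Lambda(d)[2d] \qquad \text{(0.2: complex analytic, resp. étale, smooth of relative dimension } d\text{)}, $$

and the log Betti statement 1.1 has the same shape for $(f^{\log})^{!}$, using that the fibers of $f^{\log}$ are oriented topological $2d$-manifolds.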
Today, for simplicity, everything is restricted to the vertical case. There are two kinds of log étale cohomology; both are due to Professor Kato. I briefly review the definitions. Let X be an fs log scheme. The first one is the ket site: objects are Kummer log étale maps. I do not give the definition of Kummer, but it is equivalent to exact plus log étale, and the exact definition of exactness was already given. A typical ket covering is... such a thing. And now, the two variants of log étale cohomology. The ket definition is more similar to the usual étale site; for example, a ket map is always an open map. The other site is the full log étale site: objects are all log étale maps, and coverings are universally surjective families. It has to be "universally" because an exact surjective family is universally surjective, but in general that is not true any more — universality holds only within fs, within the category of fs log structures. The difference lies in log blow-ups, which are log étale and universally surjective, but not Kummer étale. A typical example is... (If you leave out one of the strict transforms, it is no longer universally surjective — is that right?) Yes: if we remove some point, then it is not universally surjective, because after base change by the same thing we get an open immersion which is not surjective. Here is a morphism of topoi, from the full log étale site to the ket site. And then, 2.1, exactification: if the source is quasi-compact and the target has a chart, then, within fs log schemes, there is a log blow-up of the base such that the base change is exact. (Do you have to know the chart, rather than the frame? — The frame is enough, I think.) So the full log étale topology is generated by the ket topology and log blow-ups. The next lemma: pull back and then push forward, and we recover the original one; push-forward along κ and pull-back along an exact f commute; and third, exchanging the roles of κ and f, the same holds under another assumption, exactness. Moreover, let p be a log blow-up; then, in the ket context — this is by Fujiwara–Kato — if F is a locally constant constructible sheaf of Λ-modules, then push-forward followed by pull-back gives back the original, and the push-forward is again locally constant and constructible. For the full log étale site the situation is much better: there, this gives an equivalence of topoi. Using this proposition we can prove various fundamental results, explained soon, by going from ket to full log étale and vice versa — by exactifying via a log blow-up; and for this log blow-up, under exactness, we can use the proposition. Now I have to explain the fundamental theorems. I want to state ten theorems — no, no: ket proper base change, smooth base change, proper smooth base change, and cohomological dimension, in two forms. More theorems are needed today, but... (It's difficult to read what you write in this part, because of the shadows. — Proper base change, smooth base change... — What did you write above? — Proper smooth base change.) Some are new and some are old: this one is new and this one is new, and the published ones are this and this, only two. These four theorems are due to Professor Kato, where — I should explain — "not naive" means that the statement holds under a somewhat strange condition, as follows. Consider the Cartesian square concerned, take any point, and consider this homomorphism of monoids.
The condition is: for any element a of the associated group of this monoid, if f^gp(a) belongs to M̄ and g^gp(a) belongs to M̄^{-1}, then a is trivial. Under this condition, ket proper base change and ket smooth base change hold, and this is, in a sense, best possible — not the naive condition, but close to it. The other statements hold naively, with no problem. Some remarks on the condition (★). First, if f or g is exact, then the condition holds; so under exactness we can freely use every base change theorem. Next, if a Cartesian diagram D in fs log analytic spaces satisfies the obvious analogue of this condition, then D^log, in topological spaces, is also Cartesian. This is published. So it seems a rather strange condition, but it is related to good geometry. And CD1 — sorry — CD1 is the following statement. Let f: X → Y be compactifiable — this means that the underlying morphism is compactifiable within fs log schemes — let d be the relative dimension and r the relative log rank. Then the cohomological dimension of Rf_! is at most 2d + r in the ket context, and 2d + 2r in the full log étale context, and this is sharp. (How do you define the direct image with proper support, since the compactification is only a scheme, not log? — Okay, okay. So this is the definition.) And CD2: assuming further that f is log flat and locally of finite presentation, with fiber dimension d, we can drop r. Of course there is no time to explain the proofs, so I only indicate the structure of the proof of proper smooth base change, which is rather complicated: full log étale PBC plus full log étale SBC imply ket PSBC in the exact case; this implies full log étale PSBC; and finally this implies ket PSBC in general. (I'm surprised to hear that a log blow-up is log flat. — Sorry, what is your question? — People might be surprised to hear that.) So now we come to the main theorem: let f be vertical, proper and log smooth, with every fiber equidimensional of dimension d; then Rf_!... The advantage is that the bases are arbitrary, but clearly there are some problems — properness should not be necessary, of course. So, problem one: non-locally-constant F and non-proper f. That is okay in the case where Y is a standard log point; this is an earlier result, proved by embedding X locally as a log-smooth, log-regular locus into a toric variety and calculating explicitly. (Do you take Rf lower star or lower shriek in the non-proper case? Is it like Poincaré duality, so with lower shriek? — On the right-hand side it would be lower shriek.) And problem two: the non-vertical case. That is a different story, and I give no more comment.
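To record the statements just made in symbols — this is my reconstruction of the board, so the exact placement of the characteristic monoids in (★) and the expected shape of the duality should be taken as a sketch. For the Cartesian square with f: X → Z and g: Y → Z and compatible geometric points x̄, ȳ, z̄, condition (★) reads

$$ a \in \overline{M}^{\,\mathrm{gp}}_{Z,\bar z},\quad f^{\mathrm{gp}}(a) \in \overline{M}_{X,\bar x},\quad g^{\mathrm{gp}}(a) \in \overline{M}_{Y,\bar y}^{\,-1} \;\Longrightarrow\; a = 1; $$

the cohomological dimension bounds are

$$ \operatorname{cd}(Rf_!) \le 2d + r \ \ \text{(Kummer étale)}, \qquad \operatorname{cd}(Rf_!) \le 2d + 2r \ \ \text{(full log étale)}; $$

and the main duality theorem should presumably take the usual form of a trace map $Rf_{!}\,\Lambda(d)[2d] \to \Lambda$ inducing $f^{*}F(d)[2d] \xrightarrow{\ \sim\ } Rf^{!}F$.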
Now I sketch the proof. (Is this for the ket topology or for the full log étale topology? — Both.) Step one: assume f is saturated. The main difficulty is to construct the trace homomorphism, and there are three steps. First, assume f is saturated, and consider the Cartesian diagram obtained by forgetting the log structure. Since f is saturated, and by the cohomological-dimension bound, E_2^{pq} = 0 if p is too large — not only the cohomological dimension but also some further vanishing — anyway, this term vanishes, and the E_2 term only lives here, so look at this spot: the spectral sequence gives an edge map out of E_2^{2d,0}, and this map is a surjection. By proper base change, this term is the pull-back of the push-forward — so ε_* followed by the pull-back — and then we have the classical trace homomorphism, the classical trace of SGA 4. We can prove that this classical trace factors through that surjection. The factorization is non-trivial, but by proper base change again we reduce to the case 4.2. Step 2: f is exact. Then take a ket cover such that f′ is saturated. The underlying diagram of this Cartesian square is also Cartesian; the trace for f′ is constructed from the classical one, and classical traces behave well with respect to classical Cartesian squares. Finally, the general case is reduced to the exact case by log blow-up. Oh, there is no time. So thus we have the trace morphism, and by adjunction a map, and this should be an isomorphism. Again, problem 1 remains to be fixed: for the present I have no idea how to prove that it is an isomorphism there. Anyway, by formal machinery in the ket setting it gives the homomorphism in the statement. To prove that it is an isomorphism: first the ket exact case — by PSBC both sides are locally constant constructible, and then by PBC we reduce to the case of a log point base; applying κ^* we get the full log étale exact case; and finally, again by log blow-up, we have the general case. Thank you very much. — Are there questions for Professor Nakayama? — A technical question on step 2: if you want to make an exact map saturated, why do you say that a ket cover is enough? Maybe you have to take roots of order not prime to p. — I think this is needed to... if you have an integral map — no, not integral — if you have something only Q-integral, you want to make it integral. I am not sure about this; I forgot the detail, but it is possible. One day I had the same question, and the preprint had a gap, but somehow it was fixed, so we can discuss it. — Are there other questions? — A more general question. In SGA 4, Rf_! has a right adjoint, and then one tries to calculate Rf^! in nice cases. Naively imitating this, maybe this is not an isomorphism. So what goes wrong? Why does the naive imitation of SGA 4 not work? — This is a basic point. I think in SGA 4 it is done by reduction to curves: you have the duality for curves and some dévissage. So here maybe one could find a way to reduce to curves; I don't know. — What goes wrong? Why do you take this sort of complicated path? — This is already a problem in the log point case. Perhaps there is no good fibration in log geometry. — You have this condition here; I just wondered if you could write a simple example where (★) isn't satisfied. — An example where it is not satisfied: for instance, when both maps are log blow-ups. — I have a question: this is global duality; any hope for a local one? — In which sense? — I mean, you are writing complexes here. — Oh, you are writing complexes — that is a problem. — No more questions.
Ogus and I proved the Poincaré duality theorem of Verdier's type in log Betti cohomology (Geometry and Topology 2010). I discuss the l-adic analogue, that is, relative log Poincaré duality theorems in log étale cohomology, together with other fundamental theorems in log étale cohomology.
10.5446/20235 (DOI)
Thank you so much for inviting me. It is a real joy to come here for such a wonderful occasion. Also, since I am the last speaker, I should probably thank the organizers for their wonderful job; it was absolutely great to be here. Well, anyway, I will talk about the étale situation, but let me first remind you of the theory developed by Kashiwara and Schapira already thirty years ago in their book Sheaves on Manifolds. So here is the situation. Let X be a complex manifold of dimension n, and suppose that I have a constructible sheaf F — "sheaf" will always mean a complex of sheaves — with some coefficients. Then here is the definition. The singular support of F is a closed subset of the cotangent bundle of X, defined as follows: it is the smallest closed subset with the following property. Whenever we have a pair (U, f), where U is an open subset of X and f is a holomorphic function on U, such that the graph of the differential df does not intersect my closed subset, then F is locally acyclic with respect to f. Locally acyclic means — I assume the definition is known — that if I compute the vanishing cycles of f with coefficients in F, they are trivial. So that is the definition, more or less. It is easy to see that this is a conical complex-analytic subvariety of T*X. Moreover, it is easy to check the following two things: the singular support is empty if and only if F equals 0; and the singular support equals X — when I write X as a subset of the cotangent bundle, I mean the zero section — if and only if F is locally constant and nonzero. Okay. Now, a theorem. The theorem they prove is the following. It is a complex subvariety, so we can look at its irreducible components, and the claim is that all irreducible components have dimension n. This is evident in the two examples above, but it is true in general. In fact, they show that the singular support is Lagrangian — it is a subvariety which can have singularities, but on its open part it is Lagrangian. Since it is also conical, this means that every irreducible component of the singular support is the conormal bundle to some closed subvariety of X; by conormal bundle I mean that over the smooth part you take the usual conormal bundle and then take the closure. Okay. Well, that is their first main theorem in this situation.
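In symbols, the definition and the two easy properties just stated (this is the standard formulation; the notation Γ_{df} for the graph of the differential is mine): SS(F) ⊂ T*X is the smallest closed conical subset such that for every pair (U, f), with U ⊂ X open and f holomorphic on U,

$$ \Gamma_{df} \cap SS(F) = \emptyset \quad\Longrightarrow\quad F \text{ is locally acyclic with respect to } f; $$

moreover

$$ SS(F) = \emptyset \iff F = 0, \qquad SS(F) = T^{*}_{X}X \ (\text{the zero section}) \iff F \text{ is locally constant and nonzero}. $$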
And here is another theorem; let me write it maybe on the second board. Now suppose that my coefficient ring is a field. Then the claim is the following: you have a collection of subvarieties of middle dimension in the cotangent bundle, and one can naturally assign to every irreducible component an integer, making it a cycle, called the characteristic cycle; it is denoted in this manner, and the multiplicities are such that the following properties hold. The first property: suppose we have a pair (U, f) as before — an open subset and a holomorphic function on it — but whereas before we considered the situation where the differential of f did not take values in the singular support, now suppose that its graph intersects the singular support at a single point. Well, then we know that, by the definition of the singular support, our function f is locally acyclic outside of x, so there are no vanishing cycles there; and this means that if I compute the vanishing cycles for f, the result is a skyscraper at x, so I can compute its dimension. Well, "dimension": it is a complex, so really it will be an Euler characteristic, but let me write dimension. The claim is that the following formula for this dimension holds: I put a minus sign here, and then the local intersection index of df(U) — the graph of df over U, a subvariety of the cotangent bundle, considered as a cycle with multiplicity one — with the characteristic cycle. We have two cycles which intersect in one point, so the local intersection index at that point makes sense, and the claim is that it equals exactly the dimension of the vanishing cycles. (You don't claim anything about the multiplicities — they could be positive, negative, or zero? — I don't claim anything. Well, I will claim something, but in a short while, okay? — Is it an equality or an inequality? — It is an equality.) The second assertion is global. Assume that X is compact, and consider the Euler characteristic of X with coefficients in F — that is, we compute the cohomology of X and take its Euler characteristic. The formula says that it is just the intersection index of X and the characteristic cycle. Here is the situation: the cotangent bundle itself is certainly non-compact, but you have the zero section, and since X itself is compact, you can intersect X with a cycle of complementary dimension; this is well defined, and the claim is that you have this equality. Now the last property, which is the answer to Wolf's question; let me put it here: if F is perverse — if F is a perverse sheaf — then the characteristic cycle is effective. (You define it for a usual sheaf; does the same work for a bounded complex? — Yes — what do you mean, the same holds for bounded complexes? — You consider the condition of local acyclicity in the sense of the vanishing cycles. — Yes, in the sense of vanishing cycles. — Okay, but you can do it either for F a single sheaf or for an object of the derived category. — For me, "sheaf" means an object of the derived category, okay? So you can do it, but note: the singular support is just a subset, with no coefficients; the characteristic cycle has coefficients, and you can have a huge singular support but characteristic cycle zero. For example, take any F, shift it by one and take the direct sum: the singular support will not change, but the characteristic cycle will disappear. — You want to say that this cycle is uniquely determined by F? — Yes, uniquely determined by F; that I will say in a moment.) In fact this first condition completely determines the characteristic cycle: for any component you can find a function whose differential meets that component at a single point of its smooth part, and then the formula immediately produces the multiplicities. (And not only effective, but with strictly positive multiplicities? — Yes, that I did not say: moreover it is strictly positive, so the support of CC equals the singular support.)
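With the sign conventions as stated (and "dim" meaning the Euler characteristic of the stalk of vanishing cycles), properties (1) and (2) read, in symbols — this is my transcription of what was described on the board:

$$ \dim \phi_{f}(F)_{x} \;=\; -\,\big( [\,df(U)\,] \cdot CC(F) \big)_{(x,\,df_x)} \qquad \text{when } \Gamma_{df}\cap SS(F) = \{(x, df_x)\}, $$

$$ \chi(X, F) \;=\; \big( [\,T^{*}_{X}X\,] \cdot CC(F) \big)_{T^{*}X} \qquad \text{for } X \text{ compact}. $$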
One comment is that the characteristic cycle certainly depends on the sheaf in an additive manner, so it is a homomorphism from the K-group — the K_0-group — of constructible sheaves to the group of cycles on the cotangent bundle; that follows immediately from formula (1). The second thing: let us consider the example where my sheaf is constant. Then, as I told you, the singular support is the zero section, so by the last property it equals X, and let me write that the characteristic cycle equals (−1)^n times X. Then let us see. The first formula is then exactly Milnor's formula: what stands on the right is exactly the Milnor number of the singularity of f — this x is the critical point — here stands the Milnor number and here stands the dimension of the vanishing cycles, and the minus sign has to do with the fact that things are normalized to be positive for perverse sheaves and not for the usual constant sheaf, which you can forget. Similarly, the second formula, the Euler characteristic formula, tells you that it is the self-intersection of the zero section of the cotangent bundle, and again, because of this sign, this is equivalent to the standard formula that the Euler characteristic of a manifold equals the self-intersection number of the diagonal. So at least in this situation everything is perfectly fine. One last remark about Kashiwara–Schapira is that their proofs are very transcendental. The theory they develop actually works on real analytic manifolds, and on real analytic manifolds you can refine any real analytic stratification to a decomposition by simplices; in such a situation constructible sheaves are easy, and so you can work with them. But this is an extremely transcendental operation, and you cannot just push it to the étale site. (Just one question about formula (2): which Euler characteristic is computed there — doesn't it give you the Hodge numbers? — Why? It gives you one number. — Pardon? — No, it is just the usual Euler characteristic. — I see; the first time I heard about this was from Brylinski.) Yes — I certainly should have said that their story was completely motivated not by topology at all, and not by complex geometry, but by the theory of D-modules. In the theory of D-modules the notion of singular support is one of the very first notions, a basic technical notion used to develop the theory from the very beginning. The basic fact is that the singular support can be defined for any D-module, but it has dimension at least n; those D-modules for which the singular support has dimension exactly n are the basic geometric objects — the holonomic D-modules. The multiplicities are also defined right from the definition, like multiplicities in commutative algebra. And the global Euler characteristic formula in the de Rham setting, for D-modules, is due to Brylinski–Dubson and Kashiwara, if I am not mistaken.
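Going back to the constant-sheaf example, in symbols (with the conventions above; the identification of the local intersection number with the Milnor number is the standard one):

$$ CC(\Lambda_X) = (-1)^{n}\,[\,T^{*}_{X}X\,], \qquad \big([\,df(U)\,]\cdot[\,T^{*}_{X}X\,]\big)_{x} = \dim_{\mathbf C}\,\mathcal O_{X,x}/(\partial f/\partial x_{1},\dots,\partial f/\partial x_{n}) = \mu_{x}(f), $$

so formula (1) gives $\dim\phi_{f}(\Lambda_X)_x = (-1)^{\,n-1}\mu_{x}(f)$, while formula (2) gives $\chi(X,\Lambda)=(-1)^{n}\big([T^{*}_{X}X]\cdot[T^{*}_{X}X]\big)_{T^{*}X}=\chi(X)$, the self-intersection of the zero section being $(-1)^{n}\chi(X)$.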
But I should stress that in that setting the proof is extremely simple: it is a one-line theorem, compared with the quite complicated proofs — both for the definition and for the theorem — that you have to give for constructible sheaves. It is a one-line proof, but it really uses the structure you have when you deal with D-modules rather than with sheaves. (And they also prove that when the coefficients are of characteristic zero it corresponds to the D-module characteristic cycle. — Yes, they prove that it corresponds to the D-module characteristic cycle.) Sorry, I should not have omitted the history; anyway, I will also omit other parts, since I do not have so much time. Now, the basic question — the story I will be talking about — is whether a similar assertion holds in the étale situation. If you look at the definition, everything in principle makes sense: suppose we now work with an algebraic variety over some field; then the definitions make sense, and all the statements of the theorem make perfect sense. And in fact, what I will be talking about is: yes, it is okay, but with minor modifications. So let me state the minor modifications first, and then I will discuss the story. Here is the first minor modification. Consider the case of the constant sheaf and Milnor's formula. It is known to be true when the base field has characteristic zero, but in finite characteristic it should be slightly modified: namely, instead of the dimension there should stand the total dimension. The word "total" means the following: this is a vector space on which the Galois group of the punctured disk down below acts, and in finite characteristic you can have wild ramification; "total" means that you add to the dimension the Swan conductor. The corresponding formula for the constant sheaf was proved by Deligne in the second volume of SGA 7 — there is a talk of his there which is called exactly "Milnor's formula". So that is one modification that should be made; the other modification concerns the left board: this assertion — the Lagrangian property — is false, while the remaining assertions are all true. What is on the left blackboard is proved in my note on the arXiv, and what is on the right blackboard is proved in a preprint of Takeshi Saito, which is probably also on the arXiv or will be there in the nearest future. Okay. Now, before I continue the talk, let me comment on this situation: why the thing is not Lagrangian. The fact that the Lagrangian property must disappear in positive characteristic — I think I first heard it from Deligne long ago, back in Moscow, and somehow at that moment one thinks that there is no theory and stops thinking about it. Anyway, let me produce an example; but maybe first I need some notation, since it is convenient and I will also use it afterwards.
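In symbols, the char-p modification just described (Sw denotes the Swan conductor of the inertia action on the stalk of vanishing cycles):

$$ \operatorname{dimtot} \phi_{f}(F)_{x} \;=\; \dim \phi_{f}(F)_{x} \;+\; \operatorname{Sw} \phi_{f}(F)_{x}, $$

and in finite characteristic it is dimtot, not dim, that enters the Milnor formula — for the constant sheaf this is Deligne's theorem in SGA 7.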
So, suppose that you have a map between, between algebraic, smooth algebraic varieties and which is proper and suppose that I have a conic subvariety inside of the Cartesian bundle to Y. And then it yields nature in a nature in a, well, in a pretty standard way, a conical subvariety in the Cartesian bundle to X, which I will denote in this manner. And by definition, it consists of all points in the Cartesian bundle. So what would be X and covector nu at X such that there exists Y with the properties that it leaves in the fiber and such that Df at point Y applied to nu will lie in C. There is, there is a standard way to push, to push forward the conical, the conical subset for, by the proper map. And, and a small remark that it follows directly from the definition that if I have a shift G on Y, well, let me denote by DOY the category of, pardon? Sorry, sorry, sorry, it's R. Thank you. Thank you so much. Okay, that if I have a shift on Y, then the, if I consider its direct image, this will be a shift on X. And if I compute its singular support, then it lies inside of the image in that sense of the singular support of G. So this thing gives you an upper estimate for the singular support. One small remark that this upper estimate can be clever in the sense that if R is closed in bedding, then you have a quality. But if R, but on the other hand, it can be extremely stupid. For example, if R is Frobenius' map, then this produces you the whole, the whole Cartesian bundle. And so it has, it has just, it just tells you nothing. In the case of characteristic zero, it tells you, always tells you something. But in characteristic Peno, well, now, now example. Let's consider a map. Just R, Y and X will be for us just the coordinate planes. And I want, and I want that it will be, that the map actually depends only, well, that the second coordinate would not change. So it will be given by a formula. Well let's denote the coordinates here as T and Y. And here is X and Y. And this will be, say, G of Ty. So let's consider such a transformation and my shift will be just a direct image of the constant shift on the A2. Just consider just the shift here. Now, and even this, in such a situation, you've produced all possible things that cannot happen in characteristic zero. So let's consider the stupidest example where G of Ty equals T power P plus Ty square. Okay? Just absolutely, absolutely easiest example. Now what you see? So first, you see that if Y, now what holds? So first, what properties? So first, certainly R is finite. A second property, if Y is not zero, so outside of Y equals, Y equals zero, the thing is a second property. Okay. Now let's look at what happens over the axis Y equals to zero. Yeah, you're assuming P is now equal to? P is corrected. I don't care. I don't care. You mean for a tile thing? Well, you compute the differential and the differential will be this you can forget about and here will be Y square times dt plus something and there will be dy and so it's invertible. Okay. Now, but now what happens on the axis? On the axis, it is very strange thing happens. You have dr. You have the map dr and this map does the following. If I consider the vector along the axis dt, then it sends a tangent vector. dt is sent to zero. Okay. You differentiate. And then dy, well, dy will be sent to dy. So it is a very strange map on this axis. It is, again, it's differential along the axis itself equals to zero, but in the normal direction it is a density. 
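In symbols, the construction and the computation just described — the formula for f_∘C is my paraphrase of the verbal definition, and r, g, t, y, x are as on the board:

$$ f_{\circ}C \;:=\; \{\,(x,\nu)\in T^{*}X \;:\; \exists\, y\in f^{-1}(x),\ df_{y}^{\,*}(\nu)\in C\,\}, \qquad SS(Rf_{*}G)\;\subseteq\; f_{\circ}\,SS(G); $$

for the example $r(t,y) = (t^{p} + t y^{2},\, y)$ one finds, in characteristic p,

$$ r^{*}(dx) \;=\; (p\,t^{p-1} + y^{2})\,dt + 2ty\,dy \;=\; y^{2}\,dt + 2ty\,dy, \qquad r^{*}(dy) \;=\; dy, $$

so along the axis $y=0$ the covector $dx$ pulls back to $0$ while $dy$ pulls back to $dy$.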
Now if you apply this estimate, certainly F itself, it's not local system. It's not smooth iris termification and so on. So therefore, it's singular support. We'll look as follows. So there will be zero section, well, because it's local system outside of the open thing and plus something. And there must be something else because it's not smooth and there's something else come exactly if we apply this estimate. And if you apply this estimate and look at this formula, you immediately see that so it equals x, so the zero section times c, where c is a cone over the axis y equals to zero generated by dt, dx, sorry. And so you see that it's absolutely not Lagrangian. Okay? So that's all for this example. Well. Now maybe I should let me formulate a little extension of the theorem, which is due to Dulin. And that's assertion that if we consider many faults of dimension two surfaces, then absolutely any conical subset of dimension two in the Cartesian bundle can be realized as singular support of some constructible, it's a reducible component of singular support of some constructible sheath. So it's a theorem of non-integrability of characteristics. Well. Now I want to discuss, I will not discuss proofs, so proofs here are not difficult at all. But I want to show a part of the story and this is a part of the story that explains how you see what singular support is. Singular support, for example, it's an interesting invariant. It's, well, you have some conical things, but you don't know how to see them basically because, well, this thing by functions, it's not a pleasant thing to do. But now I will describe how one can actually see singular support and, for example, to see that it has right dimension. Well. So everything, certainly a singular support has local origin. So I can assume that, and also, as I told you that if you embed something by closed embedding, then it transforms in an evident manner. So I can assume that I live on the projective space. And I will use two in order to show how the thing looks like. I will use two tools. One is Brinsk radon transform and the second is veronet embedding. So let me just recall momentarily what the radon transform is. So we have my P. We have the dual projective space and we have the standard correspondence Q. Well, and the radon transform, it is, so maybe before radon transform, then you see that in this standard diagram there is a canonical identification of the projectivizations of a cotangent bundle to P and to P check. Namely, both of them canonically identified with Q. So that's a very classical thing and essentially evident because what is a point in, say, in projectivization of the cotangent bundle to P? Well, it is a point in P and a hyperplane in the tangent space to this point. And certainly, since we live in projective space, a hyperplane in the tangent space extends uniquely to a hyperplane in the whole space passing to point. And so we get a point, projective space and a hyperplane passing through it. And that's an element of Q. That is this identification. Well, same manner here. Well, this thing is called by the way a Legendre transform. And now you have the radon transform. It is a function R from the category of shifts on P to those in P check which is given by this correspondence. Well, it has some wonderful, easy standard properties, but I will not discuss them. Well, they used in the proof, but I will not discuss them now. 
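For the record, the setup just described in symbols (the normalization of R — the shift and twist — was not written out in the talk, so it is left unspecified here): with $Q \subset \mathbf{P}\times\mathbf{P}^{\vee}$ the incidence correspondence and $p,\ p^{\vee}$ its two projections,

$$ \mathbf{P}(T^{*}\mathbf{P}) \;\cong\; Q \;\cong\; \mathbf{P}(T^{*}\mathbf{P}^{\vee}) \quad \text{(the Legendre identification)}, \qquad R(F) \;:=\; Rp^{\vee}_{*}\,p^{*}F \quad (\text{up to shift and twist}). $$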
But one thing that is very easy to check is that this identification, in some sense, it's classical approximation to the radon transform, which means the following. If I have a shift F here, then I can compute, let's take its singular support. That's a cone here. So I can consider the corresponding projectivized. This projectivization, which will just have a variety here. And on the other hand, I can do the same thing for the radon transform of F. Okay. And the claim is that they're the same. That's a simple fact, but I do not have time to describe it. So let me pass to the story that I want to have. Now, certainly just playing with radon transform help you nothing. Well, if you don't know what singular support of a shift is, then you don't know what just radon transform does not help immediately. But it helps after the very nice embedding. So let's do the following. Let's consider an embedding of my projective space. Let's call it now small projective space. A very nice embedding of some degree more than one, any degree. And then I will do radon transform on this larger projective space. So I have P. Let's embed it. So this will be a very nice embedding. Then I consider the larger projective space and the radon transform on this larger space. Well, now notation for suppose that I have a cone C inside of T star P. Well, then I can consider its image by I extended to a cone here. Well, and then I would like to take its projectivization and just notation will be that I will denote it by C in square bracket. And this thing leaves here, so here and here. Okay. Well, that's the first notation. And second, so we have F now which leaves here. And what I want to do is to apply all this functions. So I consider the radon transform of I star F. So this is a sheave here. And let's look at its ramification divisor. And ramification divisor means the following that I just restricted to the generic point. So there I have local system. And then local system extends as a local system, wherever it can extend and where it cannot it's ramification divisor. Ramification divisor of, well, so I consider it, so I need to define it, I need to know the thing at the generic point of my projective space. Good. Now the theorem, no, no, no, no, no, just plain subset. It's subset of dimension one with no multiplicities. F is any sheave on P, on small P. No, but do you take the ramification divisor of the radon transform? Yes. So I, it is not derived, are you okay? I take I star F, then I take R being the radon and not write the right. I'm sorry. There is no right to write fun in this notation. F is a complex of sheaves. F is a complex of sheaves. Okay, well, now the theorem, so informally it is the way how you reconstruct from D, which is somehow visible in variant of F, that you can reconstruct the singular support of F. And it tells you, this in particular tells you that it has right dimension. Okay, so the first assumption is that D itself can be recovered from this thing, namely it's just the image of C in square brackets. Oh, sorry, sorry, sorry. So now from now on, let's put C equal the singular support of F. Okay, then D equals image of. Excuse me, I think this slide is a third line from the bottom. That's the problem. So there are some symbols. So it's I0, C, C, P star, P tilde. No, it's contained. Subset of T star. Subset sign. Subset sign. And then that is P upper star. No, the next. No, the next is projectivization. It's projectivization of the cone. 
So I have C which lives over small P, the next standard in the standard way of the cone and then I projectivize. Maybe you are too tilde there on your left. Yes, absolutely. Thank you. Well here, well, these notations were without tilde, but okay. Now, so that is the first thing. Second, the second assumption is moreover, so my D has a different reducible come. Components and C has different reducible components. And the claim is that in this manner, they correspond one to another, namely that for every irreducible component D alpha of D, there is a unique irreducible component C alpha of C such that D alpha equals D. So basically that this, there are another thing and then you do Legendre transform and somehow spreads components that could here, they could project to something the same position in P. But when you do the thing, they will project to absolutely different. Yes, that's the effect of verinazine bedding. That's exactly. Pardon? Any of the three more than one. Identity is not allowed. And of course, you shouldn't allow the zero-dimensional objective space. Then I already pointed out that this, then you don't have, probably, well, you don't have any, but then you go to the bedding, so you can have a problem. Zero-dimensional, yes. Okay. Now three is that actually this condition two, it's uniquely defines C alpha. So that C alpha, in fact, C alpha is a unique colon of in T star P of dimension, of dimension N with that property, with property two. Okay. Okay. And maybe I should say also, well, maybe I should say also property four, that the map is projection from C alpha to D alpha is generically redichiled. Do you arrange the small project space on the big one? Pardon? C alpha is in the small or in the... No, no, no, no, no. Oh, I'm sorry. So here should be break it. I'm sorry. And the left? No, here is... Thank you. Okay. So the thing is generically redichiled. In classical situation, sorry, in characteristics hero situation, in fact, this is the map is birational, well, certainly is birational, and also this thing is, well, it sits in the Cartesian bundle, projectivization of the Cartesian bundle to P tilde, yes, and D is divisor there, and this is projectivization of the conormal to the alpha. But in case of characteristic P, in case of finite characteristics, this absolutely does not need to be... It needn't be true, so since it's needn't be Lagrangian, the singular support, well, but somehow you can recover it from the alpha. What about multiplicities? About multiplicities a little bit later. Okay. So, I will... Well, maybe the question is that I would very much want to know how to recover the thing as geometrically as possible. Even in case of projection of surfaces, finite projection of one surface to another, direct image of a constant sheath, so here we were lucky that we could recover it by this stupid situation, but even just iteration of those two things of degree P, it will lend you to a situation that you just cannot recover it from geometry. I cannot, but probably some deeper geometry. Veronaise, just to avoid linear something, things which are linear, which will break... No. If you are sheath at the beginning, there's nothing, no locus of ramification which is linear... No, no, no, no, no, no. You see, if you have identity map, then the thing is just indices by direction between quadratic cones. So if you have absolutely any quadratic cones, it need not have image... Its image need not be divisor, it need not be radiation over its image and so on. 
You can just take anything and then go back and produce the corresponding sheath. So Veronaise does something very drastically. Okay. Now let's pass to characteristic cycle. Well, at Takesh's work, it is... Well, it's subtle and it uses many other inputs. So this story is pretty rough and elementary. And I cannot discuss it just because of the absence of time, but instead of it, I will try... Well, there is something to be done there yet. It's not all the story. And one thing that comes in Takesh's story is that characteristic cycle in what he can do, it has not integral coefficients, but there are... Could be powers of P in the denominator. And that's for the reason that components of C can have... And be purely inseparable over its image and the multiplicity in the intersection in Milner's formula will have unavoidable powers of P. And so that's one thing that one would like very much to do. Now, what I would like to say is sort of a... Well, maybe hoped for formula that would explain the story. Also, I hope it will provide understanding of things like you can consider for global intersection formula for Zeolier characteristic. You can ask for finer things. For example, to compute the determinant of cohomology. And I would like to have just the story simultaneously and to have definition of characteristic... Some finer definition of characteristic cycle, which would answer also the second question. And that would not involve in itself this... That would be this proof that it does not depend on the choice of functions in Milner formula and so on, but some Helmhilder formula will be just corollary. So let me try to put this in the remaining minutes. Let me try to put it on the blackboard. So first, there is an ocean of... The moment you have the notion of singular support, you have the notion of a micro-local shifts. Well, usual shifts, they live on our space and the category of shifts, the triangulated category, they form a shift of triangulated categories on my space. I can consider for every open and consider the corresponding category. And this will be... So we have D shift of triangulated categories on X. Well, the moment we have the notions of singular support, we can do the following thing. We can consider the Katanjian bundle. Sorry, can you write that again? I can't read what you wrote. We have our manifold X and D is... Well, it's a shift of triangulated categories on X. So I signed to open set, the category of shifts, the triangulated category of shifts on it. But it is not a shift. Well, it's in modern parlance, the triangulated category means whatever infinity index you will put there. Okay. Now the moment I have the notion of singular support, you can micro-localize D over the Katanjian bundle. Well, so consider the Katanjian bundle, but I will consider it not with planes, there is a typology, but only those open subsets which are conical. Well, conical. Well, how would you define it? So if you have an open conical subset in T star X, then we can put D mu of u. This will be the quotient of D of X, monologue the fixed subcategory of those shifts whose singular support lies in the complement of u. So this is a pre-shift of triangulated categories. It has natural structure which has to do with perverse structure here. And you can ask about things like co-dimension three conjecture, but that I don't want to discuss, but just let's consider this data. Now what I want to consider, what sort of a question I want to try to ask is the following. 
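In symbols, the micro-localization just described, before passing to the question: for $U \subseteq T^{*}X$ open and conical,

$$ D^{\mu}(U) \;:=\; D(X)\,/\,\{\,F \in D(X) \;:\; SS(F)\cap U = \emptyset\,\}, $$

a presheaf of triangulated (stable $\infty$-)categories on conical open subsets, which is then sheafified.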
So suppose that my X, I will assume that X is compact from now on. And then we have, consider the functor argama from D of X to just lambda modules. So let the point be KB algebraically closed. And then I can pass to the corresponding map between K theory spectra. And what I want to do, to know is to find this map, this homotopy map of spectra. So K is K theory. So in particular, if I have a sheath, then I have actual sheath. Then I have a, it defines your point here. And so if I know the thing, I will know the homotopy, its image. It will be a homotopy point here. And such a homotopy point defines you, defines you whatever you want. It defines your characteristic if you pass to connected components. If you look at this element in Poincare group poet, it will define you that argama and all the things. So that's actually what we want to have. Well, now let me try to put on the blackboard what I want to. Oh, yes, I will, I will. So I want to, basically I want to have a localization of, so we have this map. And I want to localize it twice. I want to localize it with respect to X. And then I want to micro localize it to the Cartesian bundle. And the claim is that it's all that is needed for the theory of characteristic cycle. So, so let's see. So we have, so we have point X and the Cartesian bundle X. And let me denote this by pi and this will be P. Well, and here on this zero level, we have this story, this map that I want to understand. What does it mean to localize, to localize the thing to X? Well, as I told you, D itself, it forms sheaf of categories, of triangulated categories of on X. And then I can apply to it K. And it's easy to see that I will get a sheaf of X. Sheaf of spectra. Let's call it K of D. That's sheaf of spectra over X. Okay. Now what I want to do is to find first a map. Now it's a map of sheaf of spectra to the following things. So this is, again, this is a spectrum over the point. And I want to consider it's upper shriek pullback to X. So I will say in a moment what it means. Well, such things, well, let's consider usual spectra as part of a native spectra of that is. Yes, yes, yes, yes. Spectra for me is always in the sense of topology. The spectra in the sense of a one of hermotyp theory. So the thing is the thing is the sheaf of the native spectrum over X. And it looks as follows. So if instead of K of lambda, this will be Z. And for example, if lambda is a field, you have map to Z. Then upper shriek pullback looks as follows. So we should take take take motif and then we should shift it by to N and then put it to X. So that will be situation in case of Z. So this will be the thing. Well, now, so what I want to have is to get this basically, basically it should come by a junction. So what sits here is essentially the map from direct image. So X is compact from direct image of this. This is a part of direct image. And you have a map to K of lambda. And I want to have this to get to get such a map by junction. By the way, it is not, well, in usual such thing exists in classical usual topology, but in an algebraic geometry, it's much more interesting. Maybe I will give. Do I have two minutes? Okay. Okay. So we have this picture. So here I will put small question mark. And here there will be larger question mark. The larger question mark is this. So let's consider this arrow. So I assume that it exists. And now consider it's just plain pullback by P. So here we'll get. Now, well, so this shift, this shift of spectra, it has natural map. 
I recall that there was this D mu and there is the corresponding K shift. Well there is a map from pullback of D to D mu and that's a map on the corresponding K spectra. And now. We are on T star X now. Yes. We are leaving now here. So here we live over the point. Here we live over X. And here we live over T star X. Therefore, maybe let me put question mark on the right. So small and here there will be larger. And the larger is that there is an absolutely canonical map here. Well now, what I know is that if you live in classical, in the situation of Kashivara and Shippara and work with classical topology instead of motifs, then such a construction exists. Well, now I believe that if, well, that it should come if you, you won't understand how this story with singular support actually related to the story with vanishing cycles over multi-dimensional basis. And this map should come by itself. That's for the reason that in Kashivara-Shippara situation it's essentially playing with vanishing cycles over multi-dimensional basis. Well, but some more, some version of it for usual topology. Well, now I would, let me just, so the claim is that when you have such a formula, then you have all things that you wish to have. So for example, you have, so let me just say why vanishing cycles come as a rough, rough, rough, well, how it comes from this picture. And it comes like this. Characteristic cycle, sorry. So we have a shift, so we have a point here. So we have a section here and then we have, it comes from a section of the story. And what does it mean to have a section of the thing? So if we have, if we have a shift that, so the thing and this element, if I restrict it to the complement of the singular support of my shift, it's trivialized. There is a section just vanishes as an element of that version category. And so given such an error, it produces you for every shift, so consider the corresponding section here, and it produces your trivialization of the section when you pull it back to the Cartesian bundle and then restrict to the complement of the singular support. Now let's project the story from k-theta to z, just by the other characteristic map. And here we'll have z of n, and then we'll pull it back here. And then you know that sections of such things are the same as cycles of co-dimension n, actually, as a child group. But if we consider the section supported on some subvariety, this will be the child group of n cycles on this subvariety. And if the subvariety, our singular support has dimension n, this means exactly that we have no multiplicity such as a generic point. So in this manner, this thing immediately, if you replace k-theory by z, it produces you in particular the cycle. And believe me, it also produces you all formulas, the global earlier characteristic formula, and so on. So again, that is my whole, but I need to stop now. I'm going to give you a close, irreducible subvariety. Can you tell me what's the singular support and what's the character of the cycle? If your subvariety is smooth, then it is a conormal bundle, nothing that. If it's not smooth, then nobody knows. It depends on the singularity. What about the thing you had? Can you tell what the species is? No, I cannot. I cannot. I cannot. And it's, well, if it's not complete intersection, then even you don't know if it would be perversive or not. So, I don't know why. So, I don't know why. And then here, I would briefly explain this construction and the characteristics cycle. 
It is completely right, the contractual formula for it. What do you mean? From this triangle? Okay. Conject your formula for a point. Okay. So, if I have a sheaf on X, so I have a point here, so I will have a section here and just consider its image here. Then since it is the same as if I pass first here to here, this will be a section of the sheaf of spectrum which is trivialized on the complement of the singular support. Just because the image of the section here is trivialized on the complement of the singular support because microlocal sheaf just vanishes outside. Okay. And so, you will have a section of this fellow which is equipped. So, you replace this by Z, yes? Equipped with trivialization on the complement. This means that you have a carmology class with coefficient in the thing with support in the singular support. And this is absolutely the same as giving multiplicities. So, you do actually need the trivialization on the complement to get that? Yes. Or is the fact that it is trivializable there? No. No. It will be an element of the Chao group, of the Chao group of the cycles on. And if you consider a carmology of the thing supported on subvariety, it is the same as Chao group of the subvariety. And since it's the same co-dimension and that this is just a cycle of the generic points, nothing else. So, another thing, I think one thing to explain on this question that if you take F in DF, you see the same. If I take F in DL. Because F is important, it wasn't trivial. Okay. Wonderful, yes. And maybe you can see it here also. Yes. And actually, look somehow very, I think that I should say that there should be a relative picture for morphisms of varieties, which is very much possible. And I would love to know at least how to spell it out, at least conjecture. But again, the support for this is that you have absolutely canonical picture in the setting of real analytic varieties, which is even nice in case of circle. That means that in some sense you are looking for some complex of shift in the cotangent space supported by the singular support. Which reflects the vanishing cycle. It's not complex of shifts. It is a section of the spectrum. No, no, okay. But you are looking for a substitute. Yes, for a substitute, yes, exactly. But on the smooth part of the... No, on the whole thing. On the whole thing. On the smooth part, it is just the local system that you can in some sense see with looking the vanishing cycle. Yes, K-class. It's K-class, very probably, yes. But with the micro-local things, the Japanese team in Kashiwara, they thought they could not produce something. Some module micro-local, I don't know. Probably they produce, but I believe they don't understand it at singular points. And I would like that the theory will be as rough as you don't... You want to leave everywhere. Yes, yes. I wanted to leave everywhere since probably you need to know it everywhere to have actual picture on the level of whole K theory. Probably. Not for earlier characteristics, you need to know it at the generic points only. Not for subtler. Any more questions? So when you consider this D-Apel Mu on the cotangent on Kispar X-Mode GM, this was for the Zariski topology. So several related questions. One thing is that you use the word ship of triangular categories. So I imagine that this means there is a patching result when you work in some higher context with you. Yes, of infinity. It's an stable infinity. K-class. Then it's a ship, yes. And then in the analytic case, what do you do? You don't have... 
If you just do analytic topology, how... What do you... Absolutely, same thing. But you don't have enough... You're not seeing the singular support in the... Do you do it for complex analytic or real analytic? Real analytic. You do it for real analytic thing. And then you have enough things with... Then you have enough, yes. I mean, all of them sit in some conormals. And it's sort of a funny ship, every section it's supported on... Well, it's supported in codimensionean. So... And what was the conjecture that you mentioned there, where you spoke about this, you said there is a T structure and some conjecture if it didn't stay? Ah, okay. So that's just the words. So as I told you that every object here, just if you start with a ship, then it's a section of D mu as a microlocally, it's supported on a singular support, yes. And now what... Now suppose that you're playing not with triangulated categories, but with perverse ship with the heart. So let's consider perverse ship. Now, the following... So the thing, again, so it's supported in codimensionean. Now what you want to do is that the function of restriction outside of codimensionean plus one of these categories will be a faithful, restriction to codimensionean, yes, and plus two it will be fully faithful. And then one dimension less, it's an equivalence of categories. So that means that if I have a microlocal perverse ship, then on the whole contingent bundle or on some domain, that it is the same as microlocal perverse ship on its open subset obtained by removing as many points of good dimension and plus three as you want. And plus three, I think, or at plus four I don't... This is for the analytic topology? Yes. This is a very old conjecture in the context of demodules or maybe demodules with regular singularities that was proved fairly recently, maybe three years ago by Kashiwara and Villanine. They have... In the archive, it's called something called dimension three conjecture and so on. But amazingly, they cannot prove... They prove it actually for regular demodules or perverse ships, but with K-efficients in the field of characteristic zero for perverse ship with Zimotelka efficiency, they don't know how to prove it. I mean, the proof is analytic. No, but what is the statement? Because you define this really called sheet of primary category. So you don't take the quotient of dx by something and since the singular support is also purely of dimension something, it does not make sense. I mean... No. Look, so what happened? So suppose that you have perverse ship. Yes? Look at it as a generic point of the singular support. There you have some data, some categories that it would be nice to describe. Now your ship, if your ship is nonzero, then this data is nonzero, definitely, that you know. Now, you want to reconstruct your ship from this data. What you should do, you should add to it some data, add... Could you mention one more, some sort of a gluing data? If two components intersect by something of dimension n minus one, then you should add there some sort of a gluing data. Now... And this gluing data, they think plus gluing data uniquely defines your ship. But in order to reconstruct the ship just from this gluing data, you should have compatibility, which will sit in one co-dimension more, a generic point on one co-dimension more. And the moment you have it, you don't need to go any deeper. 
No, but once you define this D_mu of U, as D(X) modulo the subcategory of objects with singular support in the complement of U — when the complement of U has large codimension, more than the middle dimension... No, no. The thing is that you sheafify; you sheafify the story. When you define it using these quotients, you get a presheaf of categories. Then you consider the associated sheaf, and for this sheafification, in principle, it could have some sort of gluing data which lives deeper and deeper. Ah, you didn't say that this is... Ah, you have to sheafify. Yes. Sorry, that was imprecise — I'm saying such vague things. Thanks, Sasha, again. Thank you.
I will discuss some recent results of Takeshi Saito and of myself that extend the theory of Kashiwara and Schapira to algebraic varieties over a field of arbitrary characteristic.
10.5446/20230 (DOI)
So first of all I should say that almost everything in the first half of this talk is in standard textbooks in quantum field theory. I would explain how to assign integrals to graphs and in the second half I'll say give a sort of informal survey of what is known and what is not known about these integrals. The problem in this subject is knowing where to start. So typically a quantum field theory course will talk about the Feynman-Parth integral up to a certain point and then by analogy they'll make some noises about wick expansion, expansion in graphs and then sort of start again perturbatively and there's a sort of leap of faith that connects the two parts. So what I will do is start the story then with perturbation theory that seems as good a place as any. So as I just said we want to put graphs at the center of the story. So in perturbative quantum field theory interactions between graphs are represented, sorry not a good start, interactions between particles are represented by graphs. Always feel slightly awkward as a mathematician talking about these things. Here's an example of a graph representing two particles scattering and exchanging a photon. Here's for example another graph, the same two particles exchange a photon which spontaneously becomes an electron-positon pair and later which later annihilate the form of photon again. So every graph that we'll draw like this represents some kind of interaction between fundamental particles and the game is that to every such graph physicists associate an amplitude, i.g. which is given by some integral and it's going to be a function of certain data which are masses and momenta of particles. That's a function and the final answer that you compare with experiment obtained by summing over all graphs. And of course there are infinitely many diagrams of this type which represent the same outcome, the same observed outcome. And so one has to do something. You can only in practice some over finitely many graphs and so at this point you cross your fingers and hope that somehow this expansion converges which it doesn't. So I just, I'm not going to say anything about this, there's some very difficult unsolved problems relating to this but we have to, we run into some divergent series and we don't worry too much about that. Practice it. This all seems to work. Okay, so today to make some simplifications, G, I work in a scalar field theory and I'm going to work in Euclidean as opposed to Minkowski spacetime. So spacetime will be R to the D where the dimension will be, is a positive even integer. So it's important that it's even and so we'll work with a Euclidean norm. So X squared is the sum of the square of its components of X equals, okay. And so the challenge that I want to bear in mind throughout this talk is how we think about amplitudes. So how do we make sense of the zoo of amplitudes high G? So this won't make much sense now but once, when we've seen some examples, I want you to think about how, whether the problem is the challenges to try and find some order in this huge morass of examples. So of course I should say that calculating these, these Feynman amplitudes is big business. Nearly all the predictions for particle collider experiments are obtained by calculating amplitudes high G and comparing them with experiment. So there are, I don't know how many people but many teams around the world, banks of super computers running night and day calculating amplitudes. 
This is, and if you open any particle physics phenomenology paper, journal, it will be filled with huge expressions for Feynman amplitudes. So this is a very important problem and we are very far from having a mathematical theory of amplitudes. Okay, so let's actually begin. A Feynman graph, for me then, will be a graph. So a graph has a certain number, certain set of vertices, certain set of internal edges and a set of external edges. Okay, so this is a graph that satisfies the usual rules and the internal edges, each edge is connected to two vertices. So this is just a subset of pairs of vertices. The external edges on the other hand are connected to a single vertex. We represent that by a map from external edges to the vertex to which they are attached. So here's an example of a Feynman graph. Just a triangle has three vertices, three internal edges, three external edges. For convenience I'll number the edges for later use, one, two, three. And the external edges represent particles which I will typically represent as incoming but not always in fact. Okay, so Feynman graph also has some extra data, namely the data of... No, it's not an infusion. Ah, yes, thank you. So the graph is not necessarily oriented. No, but I'm good at orientating just a minute to write down the integral, Feynman integral but it won't depend on the orientation. So the graph also has an extra data of a mass, a particle mass Me which is a real number. If you're a paranoid you can take it to be positive but as we'll see it's only the squares of the masses that come into the formulae. So there's a mass for every internal edge is to every external edge we have a momentum, qi which is a d vector for every external edge. So here let's put q2 here, q1 and q3. So the convention will be almost always, sometimes I might change this, the convention will always be to have momenta coming inwards if you reverse the... If you want to represent an outgoing particle that's the same as an ingoing particle with a negative momentum. So we can always assume that all the particles are incoming if we allow the qi's to be negative. So that's totally fine. No vectors, yes. So that's a very small detail. The next remark is that this data is subject to momentum conservation and momentum conservation sum qi equals zero for all external edges. Okay. So now we want to write down an integral associated to such a Feynman graph. So Feynman graph is the graph with this data. And to do that we need some integration variables. So to every edge, e, internal edge, actually, yep, to every internal edge we assign a momentum variable, momentum variable ke, which is a d vector. Now sort of temporarily I just, I'm going to choose an orientation on the graph just to write down the integral but it won't depend on this. The integral won't depend on this orientation. So choose an orientation, you'll see why in a minute. And let me write p e to be either ke if e is an internal edge and the incoming momentum if e is external edge. And then the Feynman integral in momentum space form is, so this depends on the external momentum qi and the particle masses me. So it's an integral of real space of d dimensions, product of internal edges, default integral. So remember ke is a d vector, so integrate, this is the component by component. And here in the denominator we have ke squared plus me squared times the product of all vertices and here a delta function p e where the sum is over all edges both internal and external which meet the vertex v. 
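The spoken formula is hard to parse in transcription; written out, the momentum-space integral being described should be the following (a reconstruction from the definitions above — the signs inside the delta functions depend on the chosen orientation):

$$
I_G(q,m)\;=\;\int_{\mathbb R^{d\,N_G}}\ \prod_{e\ \mathrm{internal}}\frac{d^d k_e}{k_e^2+m_e^2}\ \ \prod_{v\ \mathrm{vertex}}\delta^{(d)}\Big(\sum_{e\,\ni\,v}\pm\,p_e\Big),
$$

where p_e = k_e for an internal edge, p_e = q_i for an external edge, and N_G is the number of internal edges.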
So this makes sense because you've chosen the orientation. The meaning of this — oh, thank you very much — R^d to the... let me give that a name now: N_G, equal to the number of internal edges of G, so this is R^(d N_G). And this factor here represents, if you like, momentum conservation at every vertex, so that the momenta at every vertex sum to zero. Okay, so this is the momentum-space Feynman integral as it's most commonly seen. Of course, it doesn't necessarily make sense; it may diverge. Sorry, the small d? Oh, this is delta, the delta function — delta is the Dirac delta function. Okay. You're assuming that at each vertex your convention is such that the momenta are all incoming? No — well, you could do that, and I'll show you in an example in just a minute — but I said we choose an orientation on G, and then we assign a momentum to every edge. So we'll have k1, k2 and k3, and at this vertex we'll get the equation k2 + k3 = q1, at this vertex we'll get k1 + q3 = k2, and at this vertex we'll get k1 + k3 = q2. So this delta function means: integrate over this region. So it's a sum with plus and minus signs, yeah, fixed by the choice of orientation. Yeah, it's clear what it means, right? If you change the orientation you change the signs, and the integral doesn't change. Now, I have to say this integral doesn't make a huge amount of sense in the first place, and that's why the first thing I'm going to do is change it into an integral which doesn't make sense either. So this is what you see in physics textbooks — I've kept it exactly as it's written. What I'm going to say is that in the good old days it was common practice to use parametric representations; that's classical, and in fact after a long hiatus it's now coming back into fashion. So classical and modern approaches use the parametric form, the parametric representation, which is much more satisfactory from many points of view. So I'm now going to derive that from this, and it all should start to make sense. Yeah? Do you need to introduce a coupling constant for each vertex? Do I need to introduce what? A coupling constant. In principle yes, absolutely. But I'm working in a very simplified theory where I can just take it to be one. I could put in a parameter. You can, yeah. But I'm only going to consider one graph at a time, so we know how many vertices it has; it doesn't make much difference. Of course in a genuine quantum field theory the true Feynman rules are much, much more complicated, but at the end I'll say something very briefly about how that changes. Just to make sense of the idea — think of the sum as a formal power series. Yeah, I think of it as a formal power series, exactly. Okay, so to get from this to parametric form, we first apply the Schwinger trick. This is the identity 1/x = the integral from 0 to infinity of e^(-αx) dα, valid for x positive. So the idea is to introduce a new variable α_e for every internal edge, and in this integral replace the factor 1/(k_e² + m_e²) by the integral from 0 to infinity of e^(-α_e(k_e² + m_e²)) dα_e.
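A one-line check of the Schwinger identity just used — not from the talk, just a quick verification with sympy:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
# Schwinger trick: 1/x written as an exponential integral over a new parameter alpha
print(sp.integrate(sp.exp(-alpha * x), (alpha, 0, sp.oo)))   # prints 1/x
```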
Okay so it's something, it looks as if we're going in the wrong direction instead of doing the integrals, we're adding more integrals. But as we'll see where there's a lot to be gained from this. Why am I kidding? So what happens to this integral, it becomes, it will now be the integral, a bunch of integrals from zero to infinity for every alpha parameter assigned to every edge. R to the d and g x minus sum e k, oops, k squared plus me squared alpha e. And then the same stuff, that's the same as before, I won't write it out again. And product d alpha e. Right, so we made the integral more complicated apparently by introducing new variables. But now the point is that we can actually do the momentum integrals now. So this involves Gauss's formula for this integral over the reels, e to the minus pi x squared dx equals one. So you can multiply integrals of this form together and to get a higher dimensional version. And in general, the d dimensional analog of Gauss's integral is as follows. If we have q of x is minus x transpose A x plus 2 b where, so here x is an rd, A is a symmetric positive definite matrix, and b is any vector in r to the d. So this is a nice quadratic form. Then the integral over r to the d, e to the q x dx. Oh, sorry, thank you very much, b scalar x. Let me put b transpose, yeah, thank you, sorry. Of course, if we add a constant, this will come up later, if we add a constant plus c here, it's just going to factor straight out of the integral because it does not depend on x. Okay, yeah, thank you for that. So the upshot is that this is pi to the d over 2 over the square root of the determinant of A times e to the b transpose A inverse b. Right, so, and this can easily be deduced from the previous integral by diagonalizing the matrix A. And it'll break into a product of d copies of this integral after a change of variables. Right, so now the idea is to apply this identity to the integral at the top and to actually do the ke integrals in this expression in the above. And the upshot will be that we will obtain a new integral only in, there'll no longer be any k's, there'll only be the alpha's, the masses and the qi, well only the alpha's, the m's and the q's. Yeah. Okay, so this can be done in general, but it's quite tedious. So I will just do illustrate on one example and then state the general answer. Okay, so let's do a very simple example. So let's take this graph. We assign momentum k1 and k2 to these two internal edges. So if you prefer we can, if I should stick to my conventions, minus q. So the delta function in the integral is momentum conservation at each vertex that reduces, they're both the same condition, so it reduces to the equation k1 plus k2 equals q. So if you like that's our domain of integration. So now I'm going to write down the exponent, so the exponent in that integral, so it was minus k squared e plus m e squared alpha e. So what is it in this case? It is alpha 1 k1 squared plus m1 squared plus alpha 2. Okay, so the first thing to do is to substitute in the momentum conservation condition. So this is alpha 1 k1 squared plus alpha 2 q minus k1 squared, this time on this term. And then the mass terms are going to accumulate here. So then the first thing you have to do is to take this part and complete the square. Of course you can always do this. In this case it gives the main coefficient is alpha 1 plus alpha 2 and then times k1 minus q alpha 2 alpha 2 squared. 
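Before the completion of the square continues, here is a rough numerical check of the d-dimensional Gaussian formula just quoted, in the case d = 2 (a sketch, not from the talk; the matrix A and vector b below are arbitrary choices):

```python
import numpy as np
from scipy import integrate

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric positive definite
b = np.array([0.3, -0.2])

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-v @ A @ v + 2 * b @ v)

# integral of exp(-x^T A x + 2 b^T x) over R^2 (truncated to a large box)
numeric, _ = integrate.dblquad(integrand, -10, 10, -10, 10)
exact = np.pi / np.sqrt(np.linalg.det(A)) * np.exp(b @ np.linalg.inv(A) @ b)
print(numeric, exact)               # the two values agree to many digits
```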
And now you need to add q² α1 α2 over (α1 + α2) — sorry, this is a plus; let me get rid of the minus, that's better. Okay, so we've completed the square. So the next thing is to change variables, or rather to shift variables. I can call this k. So let's put k = k1 − q α2/(α1 + α2), and then this is just (α1 + α2) k² plus the rest. And changing variable is fine, because we're integrating over R^d, so the integral is translation invariant. And from this representation we can plug in Gauss's formula and write down the answer. So this is just the term in the exponential; we have the exponential of minus this, integrated over k. All the stuff on the right does not depend on k at all — it's just going to factor out of the integral — and we just have to integrate e^(-(α1+α2)k²), which is easy using this formula. So I'll just write down the answer. We get π^(d/2) times the integral from 0 to infinity, dα1 dα2; the (α1 + α2)^(d/2) comes downstairs, and all this stuff over here is unchanged — it factored out of the integral in the first place. Okay. So this works in general: you can complete the square and do the momentum integrals; it's a bit of bookkeeping to work out what the answer is. At the end you get the determinant of a certain matrix, and then you apply something called the matrix-tree theorem to express that determinant in terms of graph-theoretic quantities. But that's rather straightforward. And the final answer is that the integral is some trivial factors, which I'm going to ignore — powers of π and so on — times the integral from 0 to infinity, where the product of the dα_e is over all internal edges; here you get a certain polynomial ψ_G to the power d/2 downstairs; and then you get exp(-φ_G(q)/ψ_G − Σ_e m_e² α_e). So that is the parametric form of the Feynman integral, and I now need to explain to you what these ψ and φ are. ψ_G and φ_G(q) are what are called Symanzik polynomials — graph polynomials. So first of all ψ_G. ψ_G is a polynomial first discovered by Kirchhoff, I think in 1853. It's a polynomial in just the α parameters and depends in no way on the external momenta or the masses — no dependence on the q_i. It's an important fact that it has integer coefficients, but that won't play any role in what I say today. So here's a formula for it. Actually there are many different ways to interpret this graph polynomial; I'll just give one. It is the sum over spanning trees in the graph, where you take the product over the edge variables not in each spanning tree. So T is a spanning tree: T is connected and simply connected — that means that it's a tree — and spanning means that it meets every vertex of the graph G. Or perhaps I should say — my apologies — here I'm assuming G is connected; throughout this I'm assuming G is connected, I forgot to say that. Otherwise this is going to vanish. So let's do an example; probably best to do the example I did earlier. It's very simple: there are only two spanning trees. There's one — it's a tree which meets every vertex — and there's two. And so the graph polynomial is the product of all the edges not in each spanning tree, and there's only one such edge in each case.
And it gives α2 + α1, which is indeed the term that we found in the denominator here. You should say that the tree doesn't contain an external edge. Yes, yes, you're right — so think of T as a subset of the internal edges; the external legs play no role in this at all, we just ignore them. Perhaps I should do another example. This graph which we had earlier — the spanning trees are {1,3} and so on, but the polynomial is going to be linear again, so that's not a good example. Let's do this example, which we'll use later on. So what are the spanning trees here? They are the single edges one, two and three, and so ψ_G is α2α3 + α1α3 + α1α2. Okay. So that's one of the terms in the integral. And then we want the second Symanzik polynomial, φ_G. Now it does depend on the external momenta, which I represent just by the letter q. It's again a polynomial with integer coefficients in the α's, and in fact in the dot products — the scalar products — of the external momenta. So the formula is: minus the sum over spanning 2-trees T of q^(T1) · q^(T2) times the product over the edges not in T of α_e; the dot product is the Euclidean dot product. So what is a spanning 2-tree? T is a spanning 2-tree if and only if it is a forest with exactly two components T1 and T2 — each component is simply connected — and it spans, which means that it meets every vertex of the graph. And then what is q^(T1), q^(T2)? q^(Ti) equals the total momentum entering the tree Ti, the sum of all external momenta entering Ti. Okay, so an example — let's do the example we did earlier. Spanning 2-trees: well, there's only one, it's this, with components T1 and T2. And so we get minus q dot (−q) times the product of all the absent edges, which is α1α2. And that is exactly q² α1 α2, which is, hopefully, exactly the term up there: minus α1α2 q². It's exactly the same polynomial. No, I didn't get a minus sign. Sorry? It's something like this. No, no — I put a minus sign here; maybe you didn't see it, there's a minus sign here. Yeah, so the two minuses cancel to give a plus. Okay, let's do a more interesting example: the triangle graph we had earlier. No, no, I think it's fine — this is the definition of φ, but this is minus φ. So it's minus α1α2 q², from that blackboard, over ψ, which is α1 + α2; it's exactly this term, and two minuses give a plus, so I think it's fine. So there's a minus here — often I don't put this in the formula, I write it differently — there's a minus here, and I think it all works. And there's a minus here and then there's another minus here; three minuses. There is a minus here on the left. No, no — this dot product always produces a minus, because q^(T2) · q^(T2) is minus q^(T2) · q^(T1). Yeah, thank you, I was just about to say that: q^(T1) + q^(T2) = 0 by momentum conservation. So the way I normally write this is plus (q^(T1))², which equals plus (q^(T2))². But if you write it that way it's not clear it's symmetric in T1 and T2.
But if you write it this way it's obviously symmetric. But the price you pay is that you need a minus sign. But it's not this minus sign isn't that this is completely positive. This has entirely positive coefficients. So this minus gets absorbed in the stock. Absolutely yeah. Well otherwise t would have a single connected component. It's going to be connected. So one I can't remember how I labeled this but up to rotation it's this. So what are the spanning two trees? Here they are. So let me label them Q1. This is edge one Q3 Q2. That's one spanning tree. Q2 two. So what are the two trees? So what are the two trees? So what are the two trees? Q3 Q1 and then the last one Q3. Okay so then so what do we do? We take the dot product of the moment entering the first tree. So it's minus Q1 dot Q2 plus Q3 and we take the product of the edges which are not present. So that's our first tree. So what do we do? We take the moment entering the first tree. So it's minus Q1 dot Q2 plus Q3 and we take the product of the edges which are not present. So that's alpha two alpha three. Same here. So minus Q2 dot Q1 plus Q3 alpha one alpha three and the last one gives minus Q3 dot Q1 plus Q2 alpha one alpha two. And then of course by momentum conservation which I'll write up here. Q1 plus Q2 plus Q3 is zero. So this is just equal to Q1 squared alpha two alpha three. Q2 squared alpha one alpha three plus Q3 squared alpha one alpha two. And of course in the final answer all the coefficients are positive. That's actually very important. And that's because we're working in Euclidean space and that's what guarantees that these integrals make sense somewhere for at least some values of Q and M. And so these changes of variable formula and so on will be justified in some region of Q and M and for some value of D not necessary which may be a real number as I'll explain in a minute. So that's very important that these polynomials should have all positive coefficients. Okay so oh yeah let me write. So this is not a standard notation but I like to package this, give it a symbol and consider this polynomial phi GQ plus sum M E squared alpha E. So that's not a standard notation. And now. Thank you very much. Yeah thank you. Thank you. Thank you. Thank you very much. Yeah absolutely. So psi G is homogeneous of degree HG. So let me write HG is what this is called the number of loops of the graph. So it's the dimension of H1 of the graph. And on the other hand so phi GQ and hence psi GQ comma M is the number of loops of the graph. So that's the number of loops of the graph. So that's the number of loops of the graph of H1 of the graph. And on the other hand so phi GQ and hence psi GQ comma M are both homogeneous of course in the alphas they depend on other things as well but of degree exactly one more HG plus one. So that follows from some combinatorial property of spanning trees. It's very easy to check. Okay. Right so now we want to get to projective integrals. We want to get actually so there are two things you can do at this stage. We have these Feynman integrals in these form as an exponential integral. You can either do what's called a minimal subtraction which I somehow prefer but you can also do a little bit of a subtraction. So we have a little bit of a subtraction. We have a little bit of a subtraction which we can do as an exponential integral. We can do a minimal subtraction which I somehow prefer but we can also do a dimensional regularization. 
And the latter is slightly quicker and gets us straight to the arta so I'm going to do that. And so the And so what the basic idea is that we want to get rid of this exponential factor. So there's one that can be done by doing one more integration. So recall that we had i integral from 0 to infinity. So let me rewrite it in the new notation, x over psi g. Now change variables. So write alpha e equals lambda beta e, where the sum of the beta e is equal to 1. Lambda is a positive real number. And then this differential form here, product d alpha e can be written. So we call that ng is the number of internal edges, d lambda wedge omega g, where omega g equals sum minus 1 to the i. Okay, so that's a simple manipulation. And so this can be rewritten in the region where the beta is all positive. And of course, don't forget that the sum is equal to 1 of omega g over psi g. So now I view psi g as a function of the betas. It's homogeneous. So but I'm not going to write beta e in here because it will clutter the integral. So this clearly means it's viewed as a function of the betas instead of the alphas. And here we get e to the minus lambda. So from here over here, because of the degree of the psi polynomial is exactly one more than the degree of psi, when you scale all the alpha parameters by lambda, it will produce exactly one lambda coming out because the degree here is exactly one more than the degree there. So we get lambda to the minus, to the ng minus d over 2 hg, d lambda over lambda. So this integral here we can do, and it's the final step, the final integral that we need to do. And it produces a gamma value, gamma of what I'm going to call SDG times psi g over psi g qm to the minus SDG, where, so SDG is essentially this quantity which pops out, which is minus the number of edges plus d over 2 times the number of loops. And this is called, this is going to measure something about the convergence of this integral. And so it is called the superficial degree of divergence of the graph. It can be positive or negative. So essentially when it's positive, this integral is going to diverge, and when it's negative it's going to converge. And it's called, the graph is called logarithmic, overall logarithmic divergent when it equals zero. Yeah, so everything's a function of beta, but I don't want to write beta everywhere because it will become very cluttered. But from now on, let me write the next integral and everything is a function of beta. So the conclusion of all this is that we can write our Feynman integral up to maybe some trivial factors in the form. So we're going to acquire a gamma function, an integral over this hyperplane. But in fact, you notice that the integrand that comes out is actually homogeneous of degrees zero. So it's slightly nicer to write this as an integral over a projective space. So this is the sign of the argument of gamma. You want the opposite side. You want the superficial degree of divergence. No, I think this is correct. So it's superficial degree of divergence. So this is very positive when the graph is divergent. So this should be negative. Yes, it is. It's not what you wrote. Because you wrote gamma of SDG. Oh, sorry. Yeah, yeah. Thank you. Yeah, this is correct and that is not correct. Yeah, yeah. Okay, thank you. Yeah, so it measures how much it diverges. When it's very big, it means we're going to get a pole here, if that's the point. So we get psi g d over 2 psi g over psi g qm to the power of. 
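Putting the last few steps together in one display (a reconstruction from the spoken description, writing Ξ_G = φ_G(q) + (Σ_e m_e² α_e) ψ_G for the polynomial introduced above — this is the form consistent with both the two-edge example and the banana example later):

$$
\mathrm{sd}(G)\;=\;-N_G+\tfrac d2\,h_G,\qquad
I_G(q,m)\ \doteq\ \Gamma\big(-\mathrm{sd}(G)\big)\int_{\sigma}\frac{\omega_G}{\psi_G^{\,d/2}}\left(\frac{\psi_G}{\Xi_G(q,m)}\right)^{-\mathrm{sd}(G)},
$$

where ≐ means up to the elementary constant factors (powers of π and so on) that are being dropped, ω_G = Σ_i (−1)^i α_i dα_1 ∧ ⋯ (omit dα_i) ⋯ ∧ dα_{N_G}, and σ is the domain cut out by α_e ≥ 0, realized for instance by the hyperplane Σ_e α_e = 1 (or any other hyperplane, as explained next).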
So I didn't put the superficial degree of divergence in here yet, because I want to stare at it in a minute. So this is a projective integral. It's not over all of projective space, but over the simplex. Oh yeah, sorry, thank you: σ equals the set where the sum of the β_e is 1. Thank you. So this is a projective integral, and it makes sense because, if you plug in these degrees, the integrand is homogeneous of degree 0. Okay. So now the problem is, of course, that this can have a pole: this gamma factor can have a pole, particularly when the superficial degree of divergence is non-negative. So, very briefly, the idea of dimensional regularization. The idea — and this is what is done most commonly in practice, though I'm not a huge fan myself — is to replace d with d minus epsilon, for some small positive epsilon, and perform a Laurent expansion in epsilon. And what physicists do in practice is compute the coefficients of epsilon to the i, where i can be both positive and negative. Excuse me — so you can view it as an integral over this hyperplane if you prefer, but the point is that you could have chosen any other hyperplane. We didn't have to choose the one where the sum of the β_i equals 1; any other hyperplane still works, and this, if you like, is a nice way to codify that. What does a projective integral mean? It means you restrict to an affine chart and calculate the integral on that affine piece, and that's exactly the same as choosing a hyperplane at infinity and integrating over affine real space. So for physicists, this is a way to think of this as an integral over R^(N_G) with a delta function given by this equation, and it contains the fact that you can replace this with any non-trivial hyperplane. Okay, so that's the end of the first half. As I said, almost everything is in textbooks. I was informed that instead of having a pause I should just press on, and so now, in the second half, I want to give a sort of informal panorama of what we know about these integrals. So in this way, dimensional regularization doesn't require you to invent some d-minus-epsilon-dimensional geometry in the end? No, absolutely not. No, no, no, absolutely not — that's the point, that's the key point. Okay, so: a panorama. What do we know about Feynman integrals? Of course, it's impossible for me to give a completely exhaustive list, but let me try to give a picture of the landscape. The first remark is about the general one-loop diagram — oh, sorry, let me make a remark before I make this remark. All of this procedure, the Schwinger trick and the Gaussian integrals and so on, works in much greater generality; it works for gauge theories. And the upshot is that we can always write Feynman integrals in the general form shown here: what happens is that you may get different powers of ψ and Ξ in the denominator, so here A and B are integers, and P(α) is some polynomial in the α_e's — maybe it has coefficients in some Clifford algebra — and the whole integrand is homogeneous of degree zero. So that's the general shape of a parametric representation; of course the P could be extremely complicated. So, back to these scalar diagrams. The one-loop diagram can be expressed using a single function — well, two functions rather: the dilogarithm, defined by the sum Li_2(x) = Σ_{n≥1} x^n/n², which converges for |x| less than or equal to one and has an analytic continuation to C, so a multivalued function; and the logarithm.
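A quick numerical illustration of the dilogarithm just defined, comparing the series with the iterated-integral representation that comes up in a moment (a sketch, not from the talk):

```python
import numpy as np
from scipy import integrate

x = 0.7

# series definition: Li_2(x) = sum_{n>=1} x^n / n^2
series = sum(x ** n / n ** 2 for n in range(1, 500))

# iterated-integral form: integral over 0 < t1 < t2 < x of dt1/(1-t1) * dt2/t2
iterated, _ = integrate.dblquad(lambda t1, t2: 1.0 / ((1 - t1) * t2),
                                0, x,                 # outer variable t2
                                0, lambda t2: t2)     # inner variable t1 in (0, t2)
print(series, iterated)   # the two values agree
```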
So for any graph like this, you can always express the amplitude using just Li_2 and log, where the arguments x will be some functions of the masses and momenta, possibly in quite a complicated way — it will be a complicated expression in these functions with different arguments. In any case, there is essentially a single function that describes all one-loop amplitudes. This was an early discovery of Feynman and collaborators; Feynman introduced the dilogarithm into the subject. I'm skeptical. We can discuss this. Now, the general two-loop diagram, the two-loop amplitude, is not known. There isn't a function that you can look up in the canon of special functions. This also means that the singularities and the analytic behaviour of the general two-loop amplitude are not understood. But does it reduce to functions of, say, two variables or three variables? Well, at one loop there's a theorem that if you take an arbitrarily large number of external legs, it always reduces to diagrams with at most a certain number of legs. At two loops the corresponding result is not known, but I think it should follow immediately if one knew something, from Hodge theory, about weights. So it should follow. And it typically involves square roots of a quadratic form in the masses and momenta — so even in a simple example like the triangle it's in general a long formula; let's just not write it down, it's quite complicated. So the two-loop amplitude is not known in general. There's a special case that has become very fashionable at the moment and has actually been studied for many years: it's called the sunset — or, for optimists, the sunrise — diagram. This is a particular two-loop diagram. Of course, faced with these difficulties, what people typically do is make some assumption, that certain masses are zero and so on and so forth. This has a long history, and it was recently solved in the general case, with all non-zero masses and momenta. The key names are Bloch, Vanhove and Kerr, and Adams, Bogner and Weinzierl. They have very different approaches, but these papers have more or less the final word on the status of this integral; if you want to know the history, I recommend you look at these papers. So, I forgot to say something: the dilogarithm is an iterated integral. I can write it in a very simple integral form, as an integral over the region 0 < t1 < t2 < x — let me take x between zero and one for the sake of argument — involving just two differential forms, dt/(1−t) and dt/t. So this is an example of an iterated integral on C minus {0, 1}. And for some bizarre reason, which is not fully understood, the functions on this space describe a vast number of Feynman amplitudes — for reasons which are not completely clear. The sunrise, in fact, is an exception: what it involves are iterated — or rather twice-iterated — elliptic integrals on some family of elliptic curves. And one way to get at the amplitude is by averaging: you average this function over n, in the variable z. Here q has nothing to do with the q above — let me call it q naught, q nothing; it's nothing to do with that q. Strictly this doesn't quite make sense, for many reasons: first of all we're averaging a multivalued function, and it's going to have singularities when n is either very big or very small, depending on this.
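The function being averaged is not legible in the transcription; my best guess at what is meant — an assumption on my part, not a transcript of the blackboard — is an average of the (multivalued) dilogarithm over the Jacobi uniformization of the elliptic curve, schematically

$$ E(z)\;=\;\sum_{n\in\mathbb Z}\operatorname{Li}_2\!\big(q_0^{\,n}\,z\big)\quad\text{(suitably regularized),} $$

which is the sort of expression that appears in the elliptic-polylogarithm treatments of the sunrise by the authors just cited.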
So you have to regularize it in some way, but if you do that you get some function, and this function essentially describes the amplitude here completely, where q naught here will be some function of the masses and q. So I won't say more about that. Next there is a, so this is the first sort of difficult case. A huge class of amplitudes are no, and are expressible, using a finite class of functions called multiple polylogarithms, about which I think we'll hear a lot more this afternoon. So part of my job is to set up the afternoon speakers. So here's the definition of the multiple polylogarithm, some generalization of the dialog with them. So it depends on r variables, and it is the sum 1 less than or equal to kr. x1 to the k1, xr to the kr. So that defines some analytic function on the region where the x i's, let's say, are strictly less than 1. And by analytic continuation it extends to a multivalued function on c to the r minus, and then you have to take out a certain class of diagonals, given by the successive products. So this is actually c, c minus, c minus the origin to the r minus, this class of diagonals. And this is nothing other than the moduli space of r minus 3 points on a sphere. So we think of this as r minus 3 particles on a sphere. And it's not entirely clear why a priori. Oh yeah, thank you very much. So this is just consecutive. Oh yeah, r plus 3, thank you. Absolutely, the other points are 0, 1 and infinity. Okay. So for some reason this particular moduli space, which in the case r equals 1, is just c minus 0, 1. It plays a special role, and it's not understood what the analog should be to describe all higher order Feynman integrals, if there indeed is such a thing. Okay, so something which is used a lot in physics, which I'll emphasize, but it is not valid in general at all. So I'm going to explain what it is and then say why one shouldn't extrapolate beyond its range of legitimacy, is that first of all they are iterated integrals. There's no reason to think that general Feynman integrals should be iterated integrals. And there's something very special that they have a weight grading. So what that means is that the weight of, you can attach a weight to such a function, it's the sum of the indices, and the property of this weight is that if you differentiate the multiple polyloga of a certain weight, and there's an explicit formula for this, I just am not going to write it now, but I'm going to write this as a linear combination of functions of lower weight, again of multiple polyloga of strictly lower weight, in fact weight one less. So there's some recursive structure in these functions, and this weight, if you keep differentiating in a certain way you get to weight zero, and that reflects the fact that iterated integrals. But I really want to emphasize because this is often expected, in physics literature it's said that this holds in general, it's not true, this is a very special property of this particular class of functions. Next, so the sunrise is not of this type, it's something else, there should exist a class. I will leave that to the speakers this afternoon, I believe that that will be discussed in detail. So yeah, I'm just really setting up the afternoon talks. There should exist, so there is a well-defined class that we understand, infinite families of graphs of this type. Beyond that, there should exist a class of multiple elliptic type, such that IG is multiple elliptic. 
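As for the multiple polylogarithms defined above: the spoken definition comes through garbled, so here is the nested sum written out (in one common convention — conventions for the ordering of the indices differ), together with a small truncated-sum sketch in Python (not from the talk):

$$
\operatorname{Li}_{n_1,\dots,n_r}(x_1,\dots,x_r)\;=\;\sum_{1\le k_1<k_2<\cdots<k_r}\frac{x_1^{k_1}\cdots x_r^{k_r}}{k_1^{n_1}\cdots k_r^{n_r}}.
$$

```python
from itertools import combinations

def multiple_polylog(ns, xs, N=300):
    """Truncated nested sum for Li_{n1,...,nr}(x1,...,xr), over 1 <= k1 < ... < kr <= N.
    Only a rough numerical sketch; convergence needs |x_i| < 1 (or x_i = 1 with n_r >= 2)."""
    total = 0.0
    for ks in combinations(range(1, N + 1), len(ns)):
        term = 1.0
        for k, n, x in zip(ks, ns, xs):
            term *= x ** k / k ** n
        total += term
    return total

print(multiple_polylog([2], [0.5]))        # ordinary dilogarithm Li_2(1/2)
print(multiple_polylog([5, 3], [1, 1]))    # a depth-two multiple zeta value (truncated)
```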
So this is a sort of nascent theory, one of two papers defining what such functions are. So this is not known, it's not been worked out at all. So the first class beyond the polylogarithmic is completely unknown, say for one or two examples like this, maybe that and one other. And... No, so it would mean something like this lead to, it would mean sort of forced it to be elliptic on the Jacobi uniformization of the elliptic curve by averaging over Q. You can do the same thing for multiple polylogarithms, it's slightly more subtle, and that's a good, that's a perfectly valid definition of a multiple elliptic polylogarithm. Alternatively, you could look at iterated integrals on the universal elliptic curve. That's a whole class of functions, that's in some sense the genus one analog of this story, and they should describe another large class of diagrams, and that class is not known. And so the boundary, again the boundary where the polylogarithmic ends, is not known. Something is known about it, but it's not known precisely. Okay, so now let me focus on some more specific examples. I'm sure I have interest in number theorists. So I have half an hour, okay. So typically these examples are very difficult because there are lots of masses and momenta, and the amplitude is a function of many complex variables, and that's where a lot of the difficulty comes from. But often in practice you don't need such generality, and the things get easier if you make some restrictions on the momenta and the masses. So the other extreme we have single scale processes. So these are where i,g,q,m is a function of a single variable. So it could be, for example, all the masses are zero, and there's just one incoming momentum, or it could be where you just have one non-trivial mass. And what typically happens is that the dependence on that variable is completely trivial. And it will just factor out of the integral, out of the integrand. So what you're already saying is that the amplitudes are giving numbers instead of functions. So it factors out. And the coefficient is an interesting number. So what can we say about the numbers that come out of quantum field theories? So here's one family of examples, which are in some sense the most exotic. So it's the analog of this sunrise. It's diametrically opposed to the polylogarithmic class where we feel we understand quite a lot. So these are the banana graphs, which have a lot of symmetry and are very interesting. And these were first studied by David Broadhurst. So let me do one example in complete detail. So we have the same picture like this. And what we're going to do is impose, first of all, let me set all the masses equal to the same mass. Any number of masses, or just three? Just one here. So I'm going to put all masses equal. And I'm going to call it equal to n. Now, with the n1 and 2 and 3, all of them, n4 and 5. The bananas will be this family of graphs here. That's a banana. I don't know why it's called a banana. It looks nothing like a banana. Yeah, sunrise, banana. So the graph polynomial is this, alpha 1, alpha 2, plus alpha 1, alpha 3, plus alpha 2, alpha 3. The second graph polynomial, so these are all good exercises to revise the definitions of the graph polynomials I gave earlier. This is q squared, alpha 1, alpha 2, alpha 3. And this polynomial psi gives this equation, q squared, alpha 1, alpha 2, alpha 3, plus m squared, alpha 1, plus alpha 2, plus alpha 3, alpha 1, alpha 2, plus alpha 1, alpha 3, plus alpha 2, alpha 3, which is very nice and symmetric. 
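A brute-force check of the first Symanzik polynomials computed by hand above (a small sketch, not part of the talk; it enumerates spanning trees directly):

```python
from itertools import combinations
import sympy as sp

def kirchhoff_psi(n_vertices, edges):
    """First Symanzik (Kirchhoff) polynomial: sum over spanning trees T of prod_{e not in T} alpha_e.
    edges is a list of (u, v) pairs on vertices 0..n_vertices-1; multiple edges are allowed."""
    alphas = sp.symbols('alpha1:%d' % (len(edges) + 1))
    psi = sp.Integer(0)
    for tree in combinations(range(len(edges)), n_vertices - 1):
        parent = list(range(n_vertices))   # union-find to detect cycles
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        acyclic = True
        for e in tree:
            u, v = edges[e]
            ru, rv = find(u), find(v)
            if ru == rv:                   # adding this edge would close a cycle
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:                        # n-1 acyclic edges on n vertices = a spanning tree
            psi += sp.Mul(*[alphas[e] for e in range(len(edges)) if e not in tree])
    return sp.expand(psi)

# two-edge bubble: alpha1 + alpha2
print(kirchhoff_psi(2, [(0, 1), (0, 1)]))
# triangle: alpha1 + alpha2 + alpha3
print(kirchhoff_psi(3, [(0, 1), (1, 2), (0, 2)]))
# three-edge banana (sunrise): alpha1*alpha2 + alpha1*alpha3 + alpha2*alpha3
print(kirchhoff_psi(2, [(0, 1), (0, 1), (0, 1)]))
```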
And Ξ = 0 defines a beautiful family of cubics. Okay, so the next thing. So now Broadhurst says: okay, let us work in d = 2 dimensions, and put — so this is not quite Euclidean space, I've been slightly dishonest here — m² = −q² = 1. And now we define S̄_m, using Broadhurst's notation, so you can look at his paper and directly compare with his conjectures. If you look at the number of loops and the number of edges at d = 2, what happens is that the ψ drops out of the integrand, so the integral is just ω_G over Ξ_G. And the m in his notation labels the graph where m − 1 is the number of internal edges. I don't know why he shifted the indices, but I'm going to stick with his indexing to be consistent. And if you do set q² = −m², then of course the m² just factors out of the integral, so we can set m² equal to one. Yeah, this m is not the mass m. Thank you, yes, that's unfortunate — the m counting the number of edges is not the same as the mass; but the mass is one now, so that's okay. So S̄_4 is what we've just computed: it's this projective integral, ω in three variables over (α1 + α2)(α1 + α3)(α2 + α3) — a very beautiful integral. And to work it out, since it's a projective integral, we just work on some affine chart: let's put α3 equal to one, and it's just dα1 dα2 over (α1 + α2)(1 + α1)(1 + α2) — a very nice, completely convergent little integral. And here's a list of what's known about these integrals. S̄_3 is 2π/(3√3). S̄_4, which is this one, is π²/4. S̄_5 is, conjecturally — I'm not sure whether this has been proved since I last looked it up — 4π times the square root of 15 times the value of the L-function of a modular form; so some Dirichlet series that's beloved of number theorists. S̄_6 is experimentally given by 48 times Riemann zeta of 2 times a special value of the L-function of a modular form f4, where f4 is a weight-four modular form, an explicit product of Dedekind eta values. So this is something — I won't dwell on it too long — that number theorists care very much about, and it's not at all understood how, or whether, this pattern should continue. In general it's not known; after this, there's no conjecture for what these integrals should be. So you see that a very simple family of diagrams is producing numbers relating to many active topics in number theory, and we really do not understand how to make sense of this. Okay, so that's in some sense the worst possible family of examples. Now let me turn to a completely different family of single-scale examples: residues in massless φ⁴ theory. These are the examples that got a lot of us mathematicians interested. So now d is four again, and all masses will be zero. I'll say that a graph is in φ⁴ — this turns out to be equivalent to the fact that all the vertices of the graph have degree four. So here's an example. What we have in this situation is that the number of edges is twice the loop number, so the superficial degree of divergence is zero and we're in the divergent case: the gamma factor we had earlier has a pole, and we're in trouble.
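Going back for a moment to the banana integral written just above: a rough numerical check of the value S̄_4 = π²/4 on the affine chart α3 = 1 (a sketch, not from the talk; the quadrature near the corner is crude but adequate):

```python
import numpy as np
from scipy import integrate

# S4bar on the chart alpha3 = 1: integral over (0, inf)^2 of da1 da2 / ((a1+a2)(1+a1)(1+a2))
f = lambda a1, a2: 1.0 / ((a1 + a2) * (1 + a1) * (1 + a2))
value, err = integrate.dblquad(f, 0, np.inf, 0, np.inf)
print(value, np.pi ** 2 / 4)   # both print approximately pi^2/4 = 2.467...
```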
But for these φ⁴ graphs, what we do is look at the coefficient of 1/ε in the final integral in dim reg. And it turns out that this does not depend on the external momenta — it's just a number. Yeah, that's exactly what I was going to say: it's the residue. It's called the residue, and by total abuse of notation I'm going to call it I again, because I'm running out of letters — so I'll just call it I, even though it's not really accurate to call it the residue of the formula. And it's just given by the integral of ω_G over ψ_G squared. So in the previous case the integrals were special in that ψ dropped out of the integrand; these are special because the other polynomial, Ξ, drops out of the integrand. Now, these integrals don't always converge. A criterion due to Weinberg is that this integral converges if and only if G is what is called primitive — he certainly didn't use that language; primitive refers to some Hopf algebra structure that came much later. The condition is that for all strict subgraphs of G, the superficial degree of divergence of the subgraph should be negative, and that means that the number of edges of the subgraph should be strictly more than twice its number of loops. So, the first example we saw already. I leave you to work it out from the definitions that this is ω over (α1 + α2)², the graph polynomial squared. So these graphs should have four external edges? Yes, of course — it will always have four external edges, absolutely; that's a consequence of this and this. Euler's formula. Absolutely. So to work this out, we work on an affine chart, let's say α1 = 1, so it's the integral of dα2/(1 + α2)² for α2 from 0 to infinity, and that is the number 1. So the residue of this graph is just 1 — easy enough. After that they get a lot more difficult, as we'll probably hear more about this afternoon. So let me just give some of the first few examples. The next graph which satisfies all these conditions is this one; here the residue is 6ζ(3). This gives 20ζ(5). This here gives 36ζ(3)², which is the square of this amplitude — so there are several identities lurking behind the scenes. And finally, this graph with 6 loops, which took many years to first compute numerically — that was done by Broadhurst and Kreimer — this one, with those external legs there. But it can now be done symbolically, because this is now known exactly, and it is 27/5 ζ(5,3) plus 45/4 times... So what are these numbers? They're given by a nested sum — which converges when n_r is greater than or equal to 2 — and these are called multiple zeta values. Now, this 6ζ(3) turns up in many different quantum field theories, essentially for the reason that was on the board before: you can always write a general amplitude by putting numerators in, and you only get a finite family of numbers coming out. It's this same ζ(3) that shows up in quantum chromodynamics, in N = 4 super Yang–Mills; it shows up again and again. So there's nothing particularly specific here — this theory seems very special, but actually it gives a good indication of the quantities that describe more general quantum field theories. Okay. So multiple zeta values have a kind of weight — but now what I'm saying is totally abusive.
You want to assign a weight to a number — you can't do that; it doesn't necessarily make any sense. And in fact we don't know whether assigning this integer to these real numbers, the MZVs, makes any sense at all. But conjecturally, the multiple zeta values are graded by the weight. That means that there are no algebraic relations with rational coefficients between multiple zeta values of different weights. Okay, I'll come back to that later. You should say that this weight is half of the weight of the motive. Yeah, I'll say that later as well. But this is how physicists understand the notion of weight, and it's used a lot nowadays as a sort of guide to various things in the physics literature. So, it is known that there is an infinite class of such primitive graphs in φ⁴ such that I_G, the residue, is a multiple zeta value, or a linear combination of multiple zeta values. There's a combinatorial criterion; I won't say much about that. At eight loops there are so-called modular graphs, whose residues we do not expect to be of this type. So somehow these numbers describe a vast class of these amplitudes, and something goes badly wrong at eight loops: we can get graphs whose residues are expected to be something else — I don't know what to call them, but for want of a better name they should be multiple modular values, certain types of numbers relating to modular forms; they're absolutely not of this class. Another comment worth making is that not every multiple zeta value seems to occur. That means that if you take the vector space generated by the residues of these graphs, they don't seem to fill out the space of multiple zeta values; there's some very specific subspace with some very interesting properties. And finally, there's only one family of graphs for which we actually know the amplitudes, and these are the zigzag graphs: you take a bunch of triangles like this and you connect them head to toe. So this is zigzag five — the index counts the number of loops. A theorem I proved with Oliver Schnetz, which had been a conjecture of Broadhurst and Kreimer for a long time, gives a formula for this amplitude. Let me write it down, because I think there are some interesting corollaries; it depends on the parity of n. So this family of graphs gives every odd value of the Riemann zeta function, and we know in particular that all the odd zeta values definitely occur as amplitudes; from this you can also deduce that products of odd zeta values occur. But unfortunately there is no other family of graphs whose amplitudes are known to all orders in this theory, and there is no other family whose amplitude is even conjectured — so not much is known. And here you mean it's a vacuum diagram, it's not a residue? So from now on, I always means the residue. In this section — I've been a bit sloppy here — when I'm talking about residues I'm calling the residue I, which is an unfortunate abuse of notation, but it has no q and m in it. And d = 4, yes, d = 4 here. You can have momenta in there — it's φ⁴ — I'm being sloppy, and they don't play any role because of the comment here: the coefficient of 1/ε does not depend on the external momenta, so I can just ignore them. So this is the one with four external legs, and this is the closed graph, with all vertices of degree 4. And probably the next coefficient in ε does depend on them. So there are two things you can do.
You can do what you just said. Or you can also look at two loop functions like this in 5-4 theory. So here you put 0 coming in here, for example, and you have just a single momentum coming in. And the amplitude of this is some trivial dependence on q times the integral where you close up the external legs of a vacuum diagram. So these families of integrals are actually computing, if you take any integral like this, and graph and you break a leg, it's computing the amplitude of a graph like this. And it's also computing the residue of the corresponding 4-point function. So these graphs are encoding lots of integrals at the same time. So here's a conjecture that's an interesting challenge for a young person. It's in my paper with Schnetz. And so let G be a primitive graph of the type we're looking at in 5-4. So of course it satisfies n equals 2Hg in this section with Hg loops. Then we conjecture, in fact, that the amplitude is strictly less than the zigzag graph. So the zigzag is somehow the most, is the biggest possible Feynman amplitude in the theory. It's somehow reminiscent of volumes in hyperbolic geometry where you measure some complexity. These are pure numbers. So this is a positive number. So we conjecture that this graph is the biggest, and if this were true, it would imply a bound. It would imply that I of G over 4 to the H is at most asymptotically, by applying Sterling's formula to this, 2 over root 4 pi Hg to minus 3 over 2. So it gives you some bound on the size of Feynman amplitudes. And of course this is the first step in trying to understand whether you have the convergence properties or the resumability properties of your divergent series. And I think this conjecture is far better, far stronger than what can be proved at the moment. Of course you need to understand all the divergent graphs and the renormalization. But I thought that was a curious feature of this theory. And what it's tempted to try to show that doing elementary operations on the graphs somehow always increases the amplitude or something. There may be something to explore there. Sorry, but why would this bound be important then? Well if you want to understand, if you have the sum of all graphs, you have some divergent series, you need to know very precisely how many graphs there are and how big, how fast they're growing. I meant that usually this is removed by some renormalization, right? Yes, no. So these residues are contributions to the beta function which are independent of all renormalization schemes. So these numbers, no matter how you renormalize, they will always be in there. Of course there will be other graphs which have sub-divergences that need to be renormalized. And there I do not know of a good, I mean there's choices, but I don't know of an analog of this sort of thing. But if you guessed all graphs of this nature, it would tell you something about the radius of convergence. Okay, so some final remarks in the last five minutes. It should be rather open-ended. So I want to talk about this notion of weight that's been creeping into the picture. So the majority of the known amplitudes, which are, so for example, the polylogs and the MZVs, which is, forms the bulk of what we know about at the moment, they had a grading, a notion of weight. So this grading is highly conjectural in the case of multiple zeta values. And we can ask where on earth this weight comes from and if it's got anything to do with physics. 
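The closed formula for the zigzag amplitudes isn't in the transcript (it was written on the board); for reference, the published Brown–Schnetz result — quoted here from memory, so treat the exact constants with care — is

$$ I(ZZ_n)\;=\;4\,\frac{(2n-2)!}{n!\,(n-1)!}\Big(1-\frac{1-(-1)^n}{2^{2n-3}}\Big)\,\zeta(2n-3), $$

which reproduces the values quoted earlier in the talk, I(ZZ_3) = 6 ζ(3) and I(ZZ_4) = 20 ζ(5), and, by Stirling's formula, grows like I(ZZ_h) ~ 4^h · (2/√(4π)) h^(−3/2) as h → ∞ — exactly the asymptotic bound appearing in the conjecture above.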
So recall that a V, a vector space over Q is graded if and only if it admits the action of a group, where so lambda in Q star, lambda acting on V is lambda to the N, V, if and only if V of degree N. So we can think of this grading as a group action. And this suggests, among other things as well, suggests the possible action of a group on amplitudes. Which I probably doesn't make much sense. So here's an important comment. So there should be a weight as well, but the weight cannot be a grading. If there is a notion of weight on all amplitudes, it certainly can't be a grading. We should be careful of this. And it will be a filtration. It will be an increasing filtration. And if we stick with this, the normalization of the weights we've used up to now, which is the physicist's one, then we must expect half integer weights. So this is a fact that's well known to algebraic geometries, but I don't think it's been fully assimilated into the physics community. So what we do in mathematics, we tend to do is multiply all the weights by two, as Cati pointed out earlier. So what is a conjecture then? So conjecturally, this is a highly conjectural, there should exist a very large algebraic group of matrices. I should really say pro algebraic, so it's like an infinite, a group of infinite matrices. C, which acts on the space of generalized amplitudes, which is the space which certainly contains, let's call it F, F of Feynman or something. So this is the vector space over which field I won't say, of regularized, because these integrals are possibly divergent, but these are generalized, generalized integrals of the form of the general shape, some numerator, some a and b integers, and p a polynomial, which is homogeneous such that the whole integrand is homogeneous to degrees zero. So in some sense, this is like a vector space of all possible amplitudes you'd ever want to consider in any quantum field theory. Yeah, thank you, yes, absolutely, that's a very good point. So this is the space of all amplitudes, and there should be a weight filtration, not a grading. So this is the space of amplitudes of weight at most something. So C acts on F, and the action of C should preserve, preserved by the action of this group. There should be a huge group of symmetries acting on all possible amplitudes. So C would be called a cosmic Galois group. You can include any other integrals such as jump here, what's special about these guys? There's a very, very small subclass of integrals, and what was surprising? For any collection of integrals, you can do that. Absolutely, you can certainly do that. In all of the conjecture, it's kind of conjecture whatever, nor emotives. No, but if you have to replace, not actual numbers, it's a transcendence. Sure, sure, sure. Sure, this is... I agree, you can do this. The point, which I won't have time to mention, but the point is that amplitudes such as these you're finding are closed under the group. It's totally extraordinary. Yeah, absolutely extraordinary. The reason why the Galois conjugates of some crazy number like this should still be an amplitude. This seems to be the case. So in some sense, it's somehow the periods of some operad in the category of motives. It's a very strong condition. It's like the material fundamental group are the periods of a group in the category of motives. It has some very, very strong... So they're very special formally for the action. And here there's some sort of operad structure that's motivic. And so this is a useful concept. 
It really tells you something about amplitudes. So it says that the action of this group on an amplitude will be expressible in terms of smaller amplitudes in some controllable way. That means the product will tell you. Yeah, yeah, yeah. So as I've said, it's an empty statement if set up appropriately. But it will be a very powerful tool to study precisely this recursive structure. So this notion of cosmic Galois group is due to Pierre Cartier. And really I say this because they were the abstract of one of the talks that was mentioned of a co-product. So let me explain where this comes from. So we should have this group acting on the space of 5 min equals the weight n. So one theorem, for example, is that this space is finite dimensional. So this is extraordinarily strong fact you have infinitely many graphs, infinitely many possible integrands, but this vector space is finite dimensional. That's some magical feature of these integrals. If you pick a random family, that's just not true. So quickly, the dual of an action is a co-action. So if OC is the ring of functions, and I'll finish here on this group, then the group law is equivalent to a co-product on the functions. So I think this may come up again this afternoon. And this action here, star is equivalent to a co-action. So f goes to f times OC, and the game is to compute this co-action on graphs and to use that to get information about amplitudes. And so that's one way, in this challenge I mentioned at the beginning, of how do we think about all amplitudes and how do we put some structure in this, all this information. So the only way to do that is to use this action of a group or a co-action, as I think we'll hear more about this afternoon. So I'll run slightly, so I'll stop exactly there. Any questions? Do you have an example of the general action on a special class of just size? Yeah, so this doesn't make sense on numbers because you run into transcendence conjectures. We don't know that z to five could be rational, so we don't know that it makes sense. But indeed in this family, the conjecture due to Panzer and Schnitz is that this space is closed under this Galois action. So the group will send this amplitude to a linear combination of this and this. So it will map it, so this will generate a representation of the group which involves z to five and z to three, which are two smaller diagrams. So the fact that you're allowed to have z to five three is allowed to appear at this place and only in this place because you previously had a z to three and a z to five in the right places before. You're not allowed to have a z to three times z to two because that would give a z to two and there's no graph giving z to two. So it somehow rigidifies, you can see on these examples exactly how the group is acting or conjectural. I have a question about the non-comitative graph. Yes. You said that when the graph is non-comitative, the residue, the integral giving the residue is infinite? Yes, so you need to, yeah, it diverges and you have to renormalize it. So I didn't have time to explain. So we did an overall regularization for the overall divergence because the superficial degree of divergence was zero. So that's a regularization, but all the subgraphs which have superficial degree of divergence zero need to be regularizing some sense as well and that needs to be done consistently, so that's renormalization. So there's a way to do that. I didn't have time to explain that unfortunately. One more question? Alright. Thank you.
In this talk, aimed at master's and Ph.D students, I will explain how to assign integrals to certain graphs representing physical processes. After discussing the standard integral representations and their underlying geometry, I will give an overview of what is presently known and not known about Feynman integrals, and indicate why they are of interest to mathematicians.
10.5446/20225 (DOI)
Thank you everybody and welcome to my talk. It's about time to take your medication or how to write a friendly reminder bot. So I also work at Blue Yonder. So normally I'm more like a data scientist guy and yeah, the last Europe Python in Berlin I gave a talk about how you can extend scikit-learn with your own estimators or quite mathy stats talk and there were about 15 people thought okay. So this year I tried something different and well, okay, it's a little bit better at least. So surely the question that you all have is why would anyone write chat bot and there's a little story behind it. So a friend of mine was diagnosed with diabetes. So you surely all know diabetes. So you have to take insulin the whole time and he said that yeah. So there's two kinds of insulin. There's one you take before you eat something or while you eat something and then there's the long acting insulin. This is something you have to take at a specific time during the day. And he said that he's always like forgetting about this or taking it one hour too late or too early and of course he sets alarm on his smartphone but it would be really cool if someone would somehow remind him just like use a chat or call him or somehow remind him and then the idea was born okay why not write a little bot that uses Google talk or Facebook or any kind of chatting engine and reminds him to actually please take your long acting insulin now and to also wait for an answer that he really did it and otherwise like remind him again and all over. And I thought well that's also good for me. It's not only that I'm like helping a friend. It's also for me good because it's a good use case to actually start learning something new learning more about event driven asynchronous programming something where there's a real hype right now. I mean everyone's talking about async IO. There were a lot of talks about async IO. I don't know if you have seen the talk by Niklaus about the distributed hash table which was a I think a really great talk about async IO. So I took this as like yeah a use case I want to learn something about it also a little bit about XMPP and about how one can write a Google app because in the end I used Google Hangouts to actually implement it and of course yeah to help a friend. So the first thing I want to talk about is what is event driven programming and what is what is it has to do with asynchronous programming. So event driven programming this is the definition of Jessica McKellar from the twisted network programming essentials in an event driven program program flows is determined by external events it is characterized by an event loop and the use of call back to trigger action when events happen. 
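To make that definition concrete before going on, here is a tiny, self-contained illustration, not from the talk's slides, of an event loop pulling events off a queue and triggering the callbacks registered for them:

```python
# Minimal event loop: program flow is determined by events, handled by callbacks.
import queue

events = queue.Queue()
callbacks = {}                          # event name -> list of registered callbacks


def register(event_name, callback):
    callbacks.setdefault(event_name, []).append(callback)


def fire(event_name, payload=None):
    events.put((event_name, payload))


def run_event_loop():
    while True:
        name, payload = events.get()    # wait for the next external event
        if name == "quit":
            break
        for callback in callbacks.get(name, []):
            callback(payload)           # the loop decides when your code runs


register("message_received", lambda text: print("got:", text))
register("alarm", lambda _: print("time to take your medication!"))

fire("message_received", "hello bot")
fire("alarm")
fire("quit")
run_event_loop()
```

Real frameworks such as Twisted or asyncio are far more capable, of course, but the control flow is the same: you register what should happen, and the loop decides when it happens.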
So this is basically the definition and it just stated by any kind of event an event could be like a user triggers an action or a message is received over a network and then you have predefined actions the so-called callbacks that are then triggered and that actually do something and in most cases this is implemented as a kind of event loop as a main loop that listens for events then triggers the callbacks and callbacks are just continuation so it's just like what do I do if I get a message or if I have received something and event driven programs can be single threaded but also multi-threaded architecture exists so due to the the GIL in Python so async I is mostly is single threaded but you could also imagine that you had for each callback you could start a new you a new thread or something then you would have a more like multi-threaded event driven program but this is mostly not it's not what it's done in most implementation and to differentiate this a little bit from what is blocking and non-blocking you could think of an event driven program itself purely from the definition could be blocking or could be non-blocking that means for instance if you if you have a GUI and you click on the button then an event is triggered some action is gonna process and during that action your whole GUI could be blocking I mean this is sometimes things you see on Windows for instance that the whole GUI blocks and is unresponsive or while evaluating this callback could be non-blocking so that other things can happen at the same time or for instance another example would be some kind of 3D computer game I mean while the engine is waiting for user input it's still calculating the physics engine and is updating the graphics so what is asynchronous programming then asynchronous I owe or non-blocking I always a form of input output processing that permits other processing to continue before the transmission has finished so this is also a bit like like a form of cooperative multitasking because you you kind of at certain points in during your execution you kind of give away the the processor to other things and to let it yeah to you give it away so other tasks can use your can use the processor so but it's it's cooperative in contrast to preemptive in the sense that it's really at specific points and we will see that later and the concept can also be implemented as an event loop so since a picture says more than 1000 words so I'll have this graphic so if we have a program something really simple program that basically consists of three tasks and the tasks have different colors and if you see the gray boxes are actually like waiting times this is when the task reads in file or maybe waits for a network socket or message or an event by the user then it kind is it suspends its work it waits and the total time is then of the of the program is the sum of all those blocks so in multi-threaded you would like conically put each task in own thread and execute them in parallel so this is something you would not do like this in in python pattern you would more or less use processes because of the global interpreter lock but in this case it's important the point here is that you still have the gray blocks and the tasks are still waiting until yeah until some some I always ready for instance and in asynchronous there it is that you have only one thread again but during the gray blocks this actually is which to another tasks and something is done so here we start with the blue then during the waiting time here we start 
working on task number two and even on task number one since you're also ever testing by three because here we also have a waiting block and then it's in a concurrent way and in a way that can't be like really deterministic told so it could the colors could also change on the right side and I think this is this is an important point so concurrent event ring programming you don't know beforehand what the path the threat through your program is going to be so when to actually use asynchronous event ring programming if you have many tasks of course you should have at least two otherwise it makes no really sense if the tasks are largely independent so there's if there's no much intercommunication between two tasks you could imagine if two tasks are the whole time talking to each other then this could be difficult then of course when you're when your tasks wait for I owe when they normally would block and you think you could do during that time you could do other things and if your task share a lot of mutable data that would would have been to be synced if you do it in a multi-threaded or multi-processing way this is also like an indicator that you could think about using something asynchronous or something event driven and this is especially for network application and user interfaces say interfaces this is like a natural that you have you use asynchronous event driven programming so some examples I want to show to to better get the idea so easy easy tasks just fetch some URLs and print them and check the elapsed time so how would this look like in a single-threaded version so we have some some hosts just a list we set our timer and for east host we use the URL to request the host and then we print it we also do a little transformation make it a little bit more interesting we just make everything uppercase and print the first 20 characters and the host where we got the web page from and print the elapsed time so this would be like a normal thing a third program in Python how you would do it and of course what here is like the blocking so here is actually the great block that we've seen before in our diagram because we are waiting here for the request to be fulfilled and we wait for getting the request and also here we wait to like read the HTML so now let's do this in a multi-threaded way here it gets only a little bit more complicated so we also do just the request and the reading in in one function and now we generate some some threads so each task is just now a thread and it's listed in in the chops list and we start here with our timer so I skip the overhead of creating the tasks the threads then for each job we started we wait until they're joined so until they're finished and we see that there's there was a speed up so this is just more or less a direct translation of the single-threaded program in a multi-threaded program so how would this look like in an asynchronous style and I used here twisted which is right now only available for Python 2 but they're just working on a migration to Python 3 so here it gets a bit more complicated because we kind of have to separate the different parts of the program much more and we use now callbacks so we have here the capitalized functions okay we just use capitalized and strip it down to 20 characters and we print the lapse time and now here it's the interesting part so for East host in our host list we get the page and this is none of this is now a non-blocking operation so this will directly return and we'll get us a deferred result and a 
deferred result is is what in in Python async I always call the future so it's it's actually just a proxy for a future result and you can say what should happen if the result is retrieved and this is what you do with the add callback so you say if I have it if the if the deferred result ever becomes the real result then please print call the print capitalized function with it and this is yeah so we registered our callback so we and then we add everything to the list and then we say okay now for all lists if they are all if all the third results actually fired then we also want to fire the print elapsed time function and we call it with a task react the main function so this is now the event loop so to say so we say okay run this loop and run the the function and we see yeah that it that it work but what you can see from this example is it gets really complicated so adding the callbacks it's like you have to think okay I have deferred result and I add callbacks and if those fired I group them and add another one so it gets really complicated and this is also a statement that Gito van Rossohms supposedly one said so I find found it on the net this quote and the net never lies right so it requires superhuman discipline to write readable code in callbacks and if you don't believe me look at any piece of JavaScript code and so the question is how would this now look if you would for instance use a sync IO so in a sync IO you would use a coroutine so coroutine so how many of you know what a coroutine is so okay I would say most so it's just a function where you can like it's not a function coroutine is like a more general idea of a function but you can stop at that one point return and come back and here I use the AIO HGP which is based on a single and at that point we yield from the request and so we give away the execution here for other tasks to do something at the waiting time but what we get in return is the real response so if the function here so you yield from if the function after yield from returns of future you wait until the future gets like realized or if it's a coroutine until the next result is generated and then we yield from the response and read and we directly get the HTML and now for each and host in host we have a task list again we append this print page function we get the event loop and we run the task until they're done and we also see the speed up so this point here is that using async I owe and avoiding callbacks in this example it's much more readable and it much more looks like the single threaded version actually and yeah it's it avoids this complicated graphs of callbacks so I think this gets you like a good idea about asynchronous event-driven programming so now back to our actual tasks so how to build now a bot based on this so the ideas we notify a friend at 8 p.m. 
as agreed about taking his long acting insulin using Google talks or Google Hangout then we wait 20 minutes for reply then we ask again did you really take it and ask again after another time the during the whole time we check if there's a message which starts or is equal to yes if we get this message we praise him we say yeah well done good job and if the message is is any other to yes we we just say we just ignore it of course we could also send something like a I didn't get it or so yeah and after having three times times asked in total we want to send the give up message like okay now I'm giving up and ask again tomorrow so when when actually stating something like this it's for me at least it was helpful to come up first with the with the state graph because what we are actually doing with the abstraction is that we are changing between different states and there are also libraries for this in in python but before before I build it I just wanted to know okay so how many states do I actually have to make it yeah to make the solution simple as possible so when during the during the day we are in a in a no message send state so this is the initial state and what happens if we for instance get any kind of message so this is the event from my buddy from a friend at that time then we just do nothing so to avoid things that he for instance writes yes before I even have asked I mean this is a situation to consider and at so at the event at 8pm we send a message we are now in the message send state so if he replies then with a yes we send okay that's good send the praise and go back after another 20 minutes we ask again and I think you get the idea until that point and there is also if he then repeats with yes it's still okay but after one hour we send okay you've done not so good and we give up and this is how it can be translated in a program but which protocols are actually used so first I thought about or I started to implement it in XMPP so XMPP is the extensible message and presence protocol which is based on XML it or formerly it was known as Chubber and Google talk so G talk used it but it was deprecated quite lately so Google switched to to hang out which is proprietary protocol which is yeah not that good even Facebook they had an XMPP AP until just recently May 2015 where they where they stopped it so this is not so good but still G talk is is working and I also did the implementation for this and so if you want to ever do something with XMPP I would recommend the sleek XMPP library and you also get a lot of good documentation about it but since it was deprecated I switched to to Google Hangouts as I said the proprietary protocol and a really good library that was this kind of is this a reverse engineering of Google Hangouts is Hangouts by Tom Dreyer and yeah it's a quite active project at the moment and it also already provides a chat client interactive chat client and there's a lot of bots already based on on hang up so this is a really interesting project so the implementations the two are can be found on the Gluyonder website not the website but the Gluyonder group GitHub account and I want to just so just show some code examples so where it all starts is like the run of the bot class so it's really not that much code actually so you can later just look it up but so the interesting part is that you that's a client is now the hang up protocol client and you say what should happen on the connect you connect different you connect different functions to the events that are then 
called so this is the observer pattern and we have here the event loop we get the seconds to the next alarm so it's just the yeah the function that calculates from now to next 8pm today or tomorrow and says later I'm gonna call the function set alarm and then it's like calling the connect function of the of the client to do some more rest with the registering of callbacks so how does the set alarm function look like a little bit more interesting so here's the message and we get the conversation with the recipient so because I mean you could chat to too many people at the same time we yield from the message so this is also again a point where we give away the execution to do something others like waiting for an answer or for doing other kind of network transmissions we set here the state that we are now in the in the ask state we sleep and since we yield from async IO sleep here it's also we give away the execution to other tasks that can be ran concurrently and here it's just this three times asking and here is the giving up quest the giving up and in the end we just set the timer set it back to the next time and I think the code is like quite easily readable and it avoids it more or less avoids the registering a lot of callbacks on some kind of deferred results like you would have done it maybe with twisted and now handling a message so on event when the message is received we get from the message the converse conversation we get the user ID we get the text what was actually sent we check if the user ID is equal to the same to the recipient ID to the one we want to talk we check if the text is positive a positive answer and if everything is fulfilled we are in the right state it is positive and it's from from our body we just send the message that's great and go back in the in the faults in the ask equals false date so we go back to the initial state in our state diagram all right that much about the source code as I said you can just check it up online and maybe one more thing about authorization if you write a bottle like this you don't want to like provide your credentials and put it somewhere in the file so this is where the all authentication do standard comes into play and this allows that you as a resource owner give to a special give special rights to to your application so and it works by using user tokens so how does this work your application all it wants to do is to use your Google account to actually send some messages and it doesn't need to know all your contact contacts is just sending messages so it requests a token with a scope of sending messages then the you get a URL this is you can display to the user this is then basically you know all know this kind of screens where you then just click accept and you get the authorization code back and you can use this code to get a token and with this token your application can then without knowing your password of your Google account use the Google API to work yeah to to to send messages over hangups or yeah Google talk all right so just a picture to show how it looks like in the end and yeah it's just a easy easy use case but it worked in the end and he's quite happy with it thanks to Tom Dreyer so this was the guy from hangups who also helped me a bit and yeah all the pictures I took our creative comments and thank you for your attention and yeah so hi thanks for your talk I actually have two small questions like have you thought of implementing this on other communication supports like for example Telegram WhatsApp all this 
kind of stuff have you thought of other communication supports apart from hangouts like WhatsApp Facebook Telegram are you implementing it and the other question is I actually thought of making something like this having a chronic disease myself to force to just remember stuff but what I thought would be really interesting in this would be to have some kind of AI or machine learning to actually for having the computer knowing when I'm more keen to forget to take it or this kind of stuff so is it some kind of idea you have in the future because I would be really interested in participating then yeah I also thought about this because I mean now I now get the messages so when he took it and maybe over the weekend he if it's more easy to forget right I mean maybe you're in a party or whatever and it's not just not the right time and you're like you don't do anything and of course you could then run a lot of statistical analysis on it and I mean since I work as a data scientist of course if you would have a lot of information about this you could do some statistics you could even predict that on the weekend yeah but on the other end is always with private information right because yeah but I think this is a good idea and I think the basic ideas so you're to your first question could also be easily done on Facebook and so there are a lot of libraries if you start looking for it and there are also a lot of bot libraries that could help with finding the right answers or making maybe the conversation a lot more interesting so there are stuff like this I wanted to keep it simple as a first try but yeah now one could definitely build on it. More questions? Hi yeah that's really great have you thought of putting any kind of link back to maybe something physical like opening a packet or so that you both have proof as well as his yes command that he has taken his instrument. So say he had to open a packet that moved a switch and that was recorded that he had I'm more thinking about say people of Alzheimer's who need to be reminded to take tablets have you thought of expanding it in that kind of way? So far not really but so it's still. So I mean in the in physio world they they're really keen on making sure that people do actually take their tablets as opposed to just saying yes they've taken them so have you thought of expanding it so that you can have a switch that saw that someone had opened a pill box say. Ah okay now I understand yeah I mean then it gets of course more complicated you would need to have the device or what those insulin pumps for instance some already have some yeah technique in it and one if one could access that protocol and then you could maybe even yeah he could connect his smartphone to that device. I know that there's a lot going on for instance the measurement of the blood sugar is now just like three months ago there was a new device from Libre which is constantly like measuring your blood sugar you have it on your right arm and you can just read it out and on a second by second basis which is much better than doing it the conventional way and of course one could like trigger back so if you took it then you will also get from this sensor that the blood sugar goes down and so there's endless possibilities but this would then I would need to like I mean this was not a commercial idea or something was just for me a use case but I think it's definitely possible in a technical sense and one could follow up on that yeah definitely. More questions? 
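Since the scheduling coroutine was only shown on slides, here is a hedged reconstruction of the pattern described earlier, written with modern async/await rather than the yield from style used in the talk. The send_message helper and the reply queue are made-up stand-ins for the hangups client calls, and, as a simplification, a reply other than "yes" just counts as no answer for that round.

```python
# Hypothetical sketch of the "ask, wait, ask again, give up" reminder flow.
import asyncio


async def send_message(text):
    print("bot:", text)                       # stand-in for the Hangouts send call


async def wait_for_yes(replies, timeout):
    """Return True if a 'yes' reply arrives on the queue within `timeout` seconds."""
    try:
        reply = await asyncio.wait_for(replies.get(), timeout)
        return reply.strip().lower() == "yes"
    except asyncio.TimeoutError:
        return False


async def reminder(replies, asks=3, wait_seconds=20 * 60):
    await send_message("It's 8 pm, please take your long-acting insulin.")
    for attempt in range(asks):
        if await wait_for_yes(replies, wait_seconds):
            await send_message("Well done, good job!")
            return
        if attempt < asks - 1:
            await send_message("Did you really take it? Please answer 'yes'.")
    await send_message("Okay, I give up for today. I'll ask again tomorrow.")


async def demo():
    replies = asyncio.Queue()                 # the on-message callback would feed this
    asyncio.get_running_loop().call_later(0.5, replies.put_nowait, "yes")
    await reminder(replies, wait_seconds=2)   # short timeout just for the demo


asyncio.run(demo())
```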
Alright, that was fascinating. Thank you very much again, Florian. Thank you.
Florian Wilhelm - "It's about time to take your medication!" or how to write a friendly reminder bot ;-) The author shows how to use the [SleekXMPP] library in order to write a small chatbot that connects to Google Hangouts and reminds you or someone else to take medication for instance. The secure and recommended OAuth2 protocol is used to authorize the bot application in the [Google Developers Console] in order to access the Google+ Hangouts API. The author will elaborate then on how to use an event- driven library to write a bot that sends scheduled messages, waits for a proper reply and repeats the question if need be. Thereby, a primer on event-driven architectures will be given.
10.5446/20223 (DOI)
I'm going to talk in English. So, yeah, I thought a lot. Our talk, thank you. Our talk will be IRIFAS. I'm going to talk in English. I'll talk with IRIFAS and share some advice, some experiences in how can you build the file system in Python. I'm Emmanuel and I'm a software engineer at Presslabs. I'm Vlad and I'm also software engineer at Presslabs. Before we get to the details of the file system, I would like to introduce our company a bit. So, we get a picture of what we do and the problems that we encounter. We are a Romanian-based startup and we do WordPress hosting dedicated for publishers. Our main goals are performance, reliability and humanity. And as you can see on the low part of the slide, we encounter some interesting numbers along the years. We had 45 million pageviews on a single site during the day. We also had 6 million pageviews on a single site in a single hour. In our busiest month, totally summed, we had 2.2 billion pageviews. And in the past 12 months, we only had.0006 outage, including the maintenance time. Okay. So, this is not more. And this neither. And we didn't even begin the demo, so. We apologize for this. This was not planned at all. Yeah, like we are on call both of us, so this is an emergency entry. Yeah, that's it. Problem solved. Yeah, this was not planned, but it turns out well. As you can imagine, the business is far from perfect. And one of the problems that we encountered along the years was the conflict between the publishers, namely the site owners and their developers. So, usually the workflow works like this. Someone has a website and developer. And the developer writes the code, everyone is happy. Until the publisher, namely the site owner, tries to change things. And they try to change it even though they don't have the technical know-how. So, this is it. We have chaos. They break the site. We don't know who changed what. The publisher starts blaming us. The developers start blaming their publishers and blaming us. So, yeah, we have a big pile of chaos. So, we thought really hard, how can we fix this problem? And after some thinking, we came up with GitFS. But what is GitFS, actually? GitFS is a self-versioning file system based on Git. And once you mount it, you can use it just like a normal file system. But behind the curtains, it will do automatically the versioning part. So, from a functional point of view, what it does, it takes this complicated tree structure, which is not really human readable, and it transforms it into this. As you can see, we have the root folder, which contains two main folders, current and history. In the current folder, we have the state of the repository at the latest moment. So, in the current folder, you will find the newest content. In the history folder, we have a folder for each commit. So, basically, we take each commit, we take the content from the Git objects, and we display it in a human-readable way. And in the current folder, you can write, you can change the content, you can view the content. And in the history folder, you can only see the content of the commits. So, yeah, this is it. Simple, right? Now, let's do a demo, and hopefully, it will work as planned. Okay. We have here the remote repository. I'm not using the network, hopefully, this time. And here we have the developer clones of the remote repository. And now we're going to mount this file system in mount point. It's very easy. You just pass the remote URL, pass the mount point, some parameters, like local repo path, and some timeouts. Okay. 
Now, in the mount point, you're going to see that structure with current and history. In current you have the current state of the repository, which here is just one file, and in history you're going to see a very nice history of that repository, grouped by commit. Now let's go to the developer side and write some text, like putting 42 in the readme. We commit it and we push it. Now, if we go to the gitfs mount point and open that file in current, we're going to see the 42 content, and in history the last commit is now. And pretty much that's it. History is a bit special: you cannot do any write operation there, it's read-only, and the same goes for the root directory. Okay. Thank you. As you can see, it's easy as one, two, three. It was built entirely in Python and it's open source. If you find this project interesting, we welcome you to contribute, change it, adapt it to your needs, and maybe we can grow it further from this point on. But how was it actually made? Well, since neither of us had previous experience in building file systems, we started with some research and some requirements, and after analyzing these requirements we defined two problems. First of all, how can we handle the Git objects in a very efficient manner, both time-wise and memory-wise? And second of all, how can we implement the file system operations, again very efficiently? For solving the Git object management problem we use pygit2. pygit2 is a wrapper on top of libgit2, a library written in C which handles the Git objects directly, so no command line and no wasted time. For implementing the file system operations we use fusepy, which again is a wrapper on top of the C FUSE library, and using fusepy, you'll see, we have a very elegant way of implementing the file system operations. And now I'm going to let Vlad tell you more details about the intricacies of how gitfs works. Vlad? Thank you. Okay, to simplify our job a little bit, we introduced a concept called views. A view is basically just a class that implements some syscalls with some specific logic. For example, for each directory we created a view: the current view for the current directory, the history view for history, and so on and so forth. Between the actual syscall and those views we introduced a router. Based on some regular expressions, when the open syscall, for example, is passed to the router, the router routes the syscall to the specific view and executes the proper logic. It's pretty obvious, Django does it, everybody does it. Now, if I'm going to open a file from nine months ago, for example, I do an open syscall, that open syscall is passed to the router, the router decides that I need the commit view to do that open, and it instantiates a new commit view, executes the open and returns the file descriptor. This is our very easy and useful diagram. We have a main view class, called View, which inherits from LoggingMixIn and Operations from fusepy. That view is inherited by a read-only view and a pass-through view. The read-only view is inherited by the history, commit and index views because, as you can see, you cannot change the past. The current view inherits from the pass-through view; basically, the current directory is just a pass-through view with some additional magic for the write operation.
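To make the router-and-views idea a bit more concrete, here is a hedged, heavily simplified sketch, not the actual gitfs source: a fusepy Operations class that dispatches a few syscalls to per-path views, with the read-only side refusing writes with EROFS. The view contents are faked; real gitfs backs them with pygit2 trees and a working copy.

```python
# Hypothetical sketch of the router/view dispatch described above (not gitfs code).
import errno
import re
import sys
import time
from stat import S_IFDIR, S_IFREG

from fuse import FUSE, FuseOSError, LoggingMixIn, Operations


class ReadOnlyView(object):
    """History/commit views: mutating syscalls fail with EROFS."""

    def read(self, path, size, offset, fh):
        data = b"frozen content from an old commit\n"   # faked; really a pygit2 blob
        return data[offset:offset + size]

    def write(self, path, data, offset, fh):
        raise FuseOSError(errno.EROFS)


class CurrentView(object):
    """In real gitfs this passes reads and writes through to a checkout on disk."""

    def read(self, path, size, offset, fh):
        data = b"latest content\n"
        return data[offset:offset + size]

    def write(self, path, data, offset, fh):
        return len(data)                                 # pretend the bytes were stored


class Router(LoggingMixIn, Operations):
    """Regex-based dispatch from a path to the view that should handle it."""

    routes = [
        (re.compile(r"^/history/"), ReadOnlyView()),
        (re.compile(r"^/current"), CurrentView()),
    ]

    def _view(self, path):
        for pattern, view in self.routes:
            if pattern.match(path):
                return view
        raise FuseOSError(errno.ENOENT)

    def getattr(self, path, fh=None):
        now = time.time()
        is_dir = path in ("/", "/current", "/history") or re.match(r"^/history/[^/]+$", path)
        mode = (S_IFDIR | 0o755) if is_dir else (S_IFREG | 0o644)
        return dict(st_mode=mode, st_nlink=2, st_size=4096,
                    st_ctime=now, st_mtime=now, st_atime=now)

    def readdir(self, path, fh):
        return [".", "..", "current", "history"] if path == "/" else [".", ".."]

    def open(self, path, flags):
        return 0

    def read(self, path, size, offset, fh):
        return self._view(path).read(path, size, offset, fh)

    def write(self, path, data, offset, fh):
        return self._view(path).write(path, data, offset, fh)


if __name__ == "__main__":
    FUSE(Router(), sys.argv[1], foreground=True)
```

Mounting it (for example, python sketch.py /tmp/mnt) gives a browsable tree where reads under /history and /current return the faked content, and any write under /history is refused with a read-only-filesystem error, which is the behaviour the history, commit and index views enforce in gitfs.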
As you can encounter in real life, if you are doing a lot of pushing, pulling, commits and stuff like that, you'll get a lot of conflicts eventually. And we did the same, we implemented in our system a simple push-pull mechanism. And in order to solve those conflicts, we choose to implement a strategy called, always accept mine, because for us it's one of the safest strategies. But in PyGit2, you don't have this option, so you need to implement it by hand your own strategy. Also, the strategy mechanism is pluggable if you want to implement or use another strategy, just specify that at the moment. Okay, let's simulate the conflict. We have a branch, let's call master, with the commit 1, 2 and 3. And on the remote, the developer pushes commit 4, 5 and 6. And our file system on local wrote commit 7 and 8. In order to always accept the local changes, what we need to do is to get all those 7 and 8 commits and push them after the commits 4, 5 and 6. First of all, we split those remote branching in merging remote and merging local branch. We easily can find after that that the 3 commit, the third commit is the last common commit. And after that, we can find the 7 and 8 are the local changes and local commits that needs to be appended to the merging remote. After that, we just append 7 and 8 to the merging remote and rename the merging remote branch to the local branch. And that's how we solve the conflicts. Now, we have a pretty stable file system. We have basic pull-push mechanism. We solved conflicts. But now, let's see how we can behave in the real world. For that, we need a really big repository. And we choose WordPress, which has like 70,000 commits. And to do a simple listing on the history view, it took 34 minutes, which is not fun. So, as you can imagine, after some profiling, we find our bottlenecks and we can cache everything. So, we implemented three layers of cache. The first layer on the bottom, we cache all the Git objects. When we mount the repository and mount point, we read all those Git objects and we store them in the cache, in the memory. After that, and also invalidate on each new commit the cache. After that, we saw that the router just created a lot of new views and he didn't reuse them. Each time you wanted to read a new file, he would just create a new view and do the same open and read operations. For that, we implemented a simple all real cache for all the views. And in the end, we implemented a Git ignore cache and for now, we don't support some module. We did that because each time you wanted to write a file, you needed to check if, okay, in that path where I want to write, is it Git ignore or Git module? No. So, basically, what we did, we just put all the Git ignore and Git module content in a big in memory and invalidate that cache on each new commit. After that, after we implement all those three layers of cache, we managed to do the actual history listing on the WordPress repository in three seconds. So, from 34 minutes to three seconds is a big improvement. Okay. Now, for the last part, we needed a smarter upstream synchronization mechanism. We just doing just pull, push and merge is not enough because, for example, you don't want if you have a big archive and you just unzip it, you don't want to have 1,000 commits for each file. You just want to have one commit saying, okay, I just wrote 300 or 1,000 files on the disk. In order to obtain such things, we had four more components, main components. 
We have first, we have the few threads which we don't have control on them. I don't know how many few threads, few will respond for me or stuff like that. Basically, those are the current history view and other views. We have a commit queue and the sync worker. We use the commit queue to communicate between the few threads and the sync worker. The sync worker is going to do all the sync in the merging, pushing stuff. And also, we have the fetch worker which is just going to fetch the certain timeout from the remote. The fetching worker has a special mode called idle mode. For example, if you don't have any activities on your file system for more than a timeout, let's say a day, then it's go to that, it's enter that idle mode. And in idle mode, the time between fetching increases. So, for example, if you don't have any activities on your repository or on your file system for more than one day, it's going to fetch only once per week or once per month or so on. We do that to save some resources. Okay. Now, if we have, if our few threads are done writing some files, after that, some commit jobs are going to be put to the commit queue and those jobs are going to be consumed by the sync worker. The sync worker is going to batch those jobs and create only one commit. And as soon as the commit is created, he wants to push them to the upstream. In order to do that, first we need to merge those commits. In order to merge, we need a clean staging area. To get a clean staging area, we have to lock all the writes and wait until all the writes from the few threads are done. We notify the few threads, okay, now we need to merge and push, so please don't do any write operation. And also the fetch worker, okay, please stop. I'm okay. I'm going to sync the changes. After the sync process is done and all the changes are up to the upstream, the sync worker is just notifying the few threads and the fetch worker that is okay. You can now resume your work. The concurrency everywhere. Now for the final remark, we let Manu to say some final words. Thank you, Manu. If you want to use GitFS, you can simply install it. We have created an Ubuntu package and some folks from the community also created one for Fedora and Arch. You also have one for OSX, so if you're a Mac user, you can use GitFS. Okay, and now we want to leave you with some takeaways that we hope will be beneficial for you. First and foremost, you can actually create a file system in Python and use it. You can see we did it. We have been using it for almost a year now. And we had no problems related to the technologies we used. Lots of folks said, okay, you should write it in C or something more fast. But we did it and as you saw, it works great. Writing a Fuse file system at first is pretty straightforward. You have to implement some operations and you're done. But to get the data model right and the operations associated with that model sometimes can be tricky. Again, we had some problems with concurrency. This is the actual model that Vlad spoke about. As you can imagine, it was not the first one that we came up with. And we had lots of problems and we did a lot of refinements to get here. So this is a word of caution if sometime in the future you plan to write a file system, you would think really hard about the model. And last but not least, we enjoy working with new shiny tools, programming language. After all, this is a conference about programming language. 
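To make the commit-queue and sync-worker description a bit more concrete before the closing remarks, here is a hedged, self-contained sketch. The names (commit_batch, writers_allowed, the five-second batch window) are invented for illustration; the real gitfs workers also handle merging, pushing and the idle-mode fetcher.

```python
# Hypothetical sketch of batching many FUSE-side writes into one commit.
import queue
import threading
import time

commit_queue = queue.Queue()
writers_allowed = threading.Event()    # cleared while the sync worker needs a clean stage
writers_allowed.set()


def fuse_thread_wrote(path):
    """Called by the FUSE-side code after a write syscall finishes."""
    writers_allowed.wait()             # respect the "please stop writing" lock
    commit_queue.put(path)


def commit_batch(paths):
    """Stand-in for: create ONE commit covering all touched paths, then merge and push."""
    print("committing %d paths in a single commit: %s" % (len(paths), sorted(paths)))


def sync_worker(batch_window=5.0):
    while True:
        # Block until a job arrives, then keep draining for a short window so a
        # burst of writes (say, unpacking an archive) ends up in a single commit.
        paths = {commit_queue.get()}
        deadline = time.time() + batch_window
        while True:
            remaining = deadline - time.time()
            if remaining <= 0:
                break
            try:
                paths.add(commit_queue.get(timeout=remaining))
            except queue.Empty:
                break
        writers_allowed.clear()        # lock writes so the staging area stays clean
        try:
            commit_batch(paths)
        finally:
            writers_allowed.set()      # tell the FUSE threads they can resume


threading.Thread(target=sync_worker, daemon=True).start()

# Simulate a burst of writes coming from the FUSE threads:
for name in ("wp-config.php", "index.php", "readme.html"):
    fuse_thread_wrote("/current/" + name)
time.sleep(6)                          # give the worker time to flush the batch
```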
But sometimes it's good to not forget that our main purpose is making people's life easier. And we should sometimes focus on creating tools that allow non-technical people to get access to powerful systems. So someone who is not technical could use Git, for example. That's, in our opinion, something pretty awesome. You can find the project here. And we are expecting you. If you think this project is interesting, you are eager to get more contributors. And as we said, to grow it further, it has a lot of use cases that are not yet implemented but could be. Now, if you have any questions, doubts, please ask. Thank you. Hello. Can you explain how this helped you solve the first problem that you described with JavaScript? How is it put to the real world? I will answer. We have basically our clients use SFTP. And they are pretty much familiar with SFTP. So we just mounted this file system on a SFTP server. And they can use SFTP. But instead, in the background, they are using Git file system. This Git file system. And their developers can now use Git. Because usually the developers know how to use Git. But the problem is with the publishers. So in case there is a JavaScript error, you go back to a previous version? What do you do? Yeah, you can do that. But it's not automatically you need to go and do a copy from the history, from the last checkpoint of the repository or the last commit. And then copy the entire directory there. So I have a question. Great work. Congratulations. So, yeah, I'm here. So there is a way to limit the number of revisions in the history of in Git FS. So that you cannot, I mean, if you are doing a lot of updates, your storage keeps on certain limits and not grow for too long. And right now, no. But you can do some tricks here. For example, you can specify that sync timeout a little bit like to have to do the sync. Basically, the sync timeout somehow is related to how often I do the commit. And for example, I can batch an entire hour of changes in only one commit. And you can limit that way. But you don't have a hard limit say, okay, you can do only 1000 revisions or something like that. Hello. Thanks. Feels like a pretty neat tool. I have actually two short questions. First one, like you shown the Git rebase thingy was, was it like Git rebase? I mean, was it like four, five, six and seven, eight commits? Yeah, the merging part? Yeah. So my question is if you can reuse some parts of Git actually, or you had to implement it from scratch? We don't use any Git, we don't use the Git command line tool. So basically, we just did it by hand and we commit, we merge here, for example, when we paste the seven and eight commit, we needed to merge manually each commit. And the second short one, like do you profit from a tree structure file system? Because it's basically tree. Thanks. Yeah. Yeah. Can you repeat a little bit of the question because I didn't know. Do you profit from a natural tree structure from a file system itself? Not that much right now. Thanks for your talk. You were saying you are caching a lot of stuff from the Git repository. So I was wondering, is your memory consumption going up when your Git repository gets really big? Yeah, but it's not linear or expansionally because, for example, for WordPress repository, it took only 200 megabytes, I think, 200 or 300, for a very, very, very big repository. Usually in production, we have only like 60 megabytes per repository. So it's pretty, for us, it's pretty low, but yeah, it can get a little bit higher. 
Even though LibGit 2 is really, really efficient in that way. And you can tweak a little bit the cache. So for example, the views cache you can tweak and say, you need to stop at that memory size. Yeah. How do you do a specific reward? How do you find that? Yeah, for now, you don't have, it's pretty hard to model that in a file system because, for example, you need a special file. And when you open that file to do a revert, or when I'm writing, you can get like a meta file with meta data and say, okay, please revert to that commit. But for now, you only can do that manually, going to the history commit, and just move all the commits, all the files, or just the file that you are interested in. But that's a pretty cool feature. Thanks. Do you have a problem with big binary data like images, maybe? Yeah, we don't actually support that. And we have a limit on how much you can write. And this is all tweakable from the options. And second question about, do you know about GitHub trick for the big files, like, GitHub tries to move its big files from the Git system to another file system? Yeah. And keep a link. Do you get the fast to keep these links too? Yeah, for now, no. And I don't think we are going to implement that. What is a good question, and we can debate on that, because that's a lot of implication to do that. Okay, thank you for going to talk. Thank you. Instead of using the repository as a backing for file system, is it possible to use the file system as a view on an already existing repository? So it might just give you a nice way of using a standard file browser and tools to just look through the history of a Git repository you already got. Yeah, for now, no, because what it does is just clone that repository, but that's a nice idea. That's a nice idea. Usually don't work this way, but you can do that. For now, I know that you cannot push on the local repository. It needs to be a bad repository to do the actual push, but maybe we can change it a little bit. It can be just a view of your repository. Thank you. No more questions. Okay, well, thank you. Thank you. Thank you.
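Coming back to the caching layers mentioned earlier in the talk, here is a hedged sketch, with made-up names and not taken from gitfs, of the general idea of keeping a small LRU cache of per-commit views and dropping cached state when something new lands.

```python
# Hypothetical LRU cache for views, loosely inspired by the talk's description.
from collections import OrderedDict


class CommitView(object):
    def __init__(self, commit_id):
        self.commit_id = commit_id       # real code would hold pygit2 objects here


class ViewCache(object):
    """Reuse view objects instead of building a new one for every syscall."""

    def __init__(self, max_views=100):
        self.max_views = max_views
        self._views = OrderedDict()      # commit_id -> CommitView, oldest first

    def get(self, commit_id):
        if commit_id in self._views:
            self._views.move_to_end(commit_id)       # mark as recently used
            return self._views[commit_id]
        view = CommitView(commit_id)
        self._views[commit_id] = view
        if len(self._views) > self.max_views:
            self._views.popitem(last=False)          # evict the least recently used
        return view

    def clear(self):
        """The talk mentions invalidating its caches whenever a new commit lands."""
        self._views.clear()


cache = ViewCache(max_views=2)
assert cache.get("abc123") is cache.get("abc123")    # the second lookup is a cache hit
cache.clear()                                        # e.g. after the sync worker commits
```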
Danci Emanuel & Vlad Temian - gitfs - building a filesystem in Python gitfs is an open-source filesystem which was designed to bring the full powers of Git to everyone, no matter how little they know about versioning. A user can mount any repository and all the his changes will be automatically converted into commits. gitfs will also expose the history of the branch you’re currently working on by simulating snapshots of every commit. gitfs is useful in places where you want to keep track of all your files, but at the same time you don’t have the possibility of organizing everything into commits yourself. A FUSE filesystem for git repositories, with local cache. In this talk we will take a look at some of the crucial aspects involved in building a reliable FUSE filesystem, the steps that we took in building gitfs, especially in handling the git objects, what testing methods we have used for it and also we will share the most important lessons learned while building it. The prerequisites for this talk are: A good understanding of how Git works Basic understaning of Operating Systems concepts
10.5446/20217 (DOI)
their test. So thank you very much for coming. Can everybody hear me? Okay, thank you. A couple of slides of presentation. Indeed, I don't have my badge, so please let me introduce myself. I am Valerio, but that's me, of course. I have a PhD in Computational Science and I'm currently a postdoc researcher at the University of Salerno in Italy. I can define myself a data scientist, whatever that means. And of course I'm also a very geeky person, so you're going to like all this stuff. And please don't ask me to fix your computer, but I'm quite sure you'll never ask me that. Yeah, let's get serious. These are some of the topics I usually work on. I work with machine learning algorithms for information retrieval and text mining in general, and I recently joined a team in Salerno working with linked data and semantic web technologies, which is very interesting. I usually apply all this stuff to software: in fact, my main research field is software maintenance, so I basically apply machine learning algorithms to source code and to the analysis of source code. And of course I do all this stuff with Python, my preferred programming language. These are more or less all the tools I use basically every day, in particular the machine learning tools I use the most and like the most, and these are the tools I'm going to talk about in a few minutes. So let's get to the point: machine learning and the tests. So what? The presentation is more or less organized into two parts. In the first one we're going to try to understand the risks and pitfalls related to machine learning and machine learning models, or at least I'll try to introduce some of the topics you should think about. In the second part I'm going to talk about testing machine learning code: what it actually means and which tools we're required to use. But before we start, please let me ask you three questions. First of all, do you already know machine learning? How many of you? Fantastic, so you're all perfectly suited for this talk. Do you already know or use testing or test-driven development? Yeah. And have you ever used scikit-learn for machine learning? Okay, perfect. So I'll try to skip all the introductory part about what machine learning basically is. This is one of the most common definitions of machine learning: machine learning is the systematic study of algorithms and systems that improve their knowledge and performance with experience. I took this definition because it points out a very interesting part of machine learning, the algorithmic part. So, at a glance, machine learning means writing algorithms and writing code; machine learning should look like this: algorithms, data and statistics. In a few words, in essence, machine learning can be summarized as algorithms that run on data. And from our point of view, I mean from the point of view of this talk, we have to deal with the testing of algorithms that analyze data, and we need to take this into consideration to perform our testing properly. A few very common examples of machine learning. This is an example of linear regression: we have the data, the blue dots, and we want to generalize a function that fits all the data; a minimal code sketch of this follows in a moment. Another very common problem is the classification problem.
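Before moving on to classification: the regression slide is not reproduced in the transcript, so here is a minimal scikit-learn sketch of the same idea, with made-up data rather than the presenter's example.

```python
# Minimal linear regression: learn a function that fits noisy "blue dots".
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(50, 1))                           # x coordinates of the dots
y = 3.0 * X.ravel() + 2.0 + rng.normal(scale=1.0, size=50)     # noisy points around a line

model = LinearRegression()
model.fit(X, y)                                                # fit the function to the data

print("slope ~", model.coef_[0])                               # close to 3
print("intercept ~", model.intercept_)                         # close to 2
print("prediction at x=4:", model.predict([[4.0]])[0])
```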
We have the data divided into classes and we want an algorithm to divide the data properly into the two classes we have. In this case we have defined a hyperplane separating the two classes. Another well-known technique in machine learning is clustering. In the clustering problem we have different data distributed in the space, the blue dots, and we want to end up with an organization of the data like this, for instance: we want an algorithm that is able to identify the different groups in the data. The first two examples, as many of you already know, are examples of supervised learning. Supervised learning means that the processing pipeline of machine learning is more or less like this: we have the data and we transform the data into feature vectors; these feature vectors are then fed into the machine learning algorithm we have defined and want to test, together with the labels. This is the supervision part, and that's why these kinds of methods are called supervised learning. After we train the model, we want to exercise the model on new data (a short end-to-end sketch of this pipeline is shown a little further below). So basically machine learning means trying to define a model that is able to generalize its conclusions, and that's the key word: generalization. This is the supervised learning setting. The unsupervised learning setting is something like this: it's almost the same, the difference is in the output, of course, and in the fact that the supervision is missing, so no labels for the data are provided. Please let me just get back to the previous slide: the output of the supervised learning model is the expected label. We have an algorithm that is trained on a set of labeled data, and we expect the algorithm to generate the exact label, the proper label, for the new data coming after the training part. That is the supervised learning setting. In the unsupervised learning setting, since the labels are missing, the output is different: in general we may have a likelihood or a cluster ID, that is, the cluster or group the data belongs to. Okay, so this is the general introduction of the techniques and the stuff we are supposed to deal with. scikit-learn provides this kind of cheat sheet. It's a sort of mind map that you can use to decide which kind of technique you can use for your specific problem. And this is quite interesting because, as you can see, scikit-learn provides algorithms for classification problems, for clustering problems and for regression problems, so the three examples presented previously, and we also have dimensionality reduction, which is another unsupervised learning problem. Here you may find, even if I don't know if you can read it, some tips on how you can decide which technique you should use for your specific problem. First of all, if you have labels you may end up with regression or classification, so a supervised learning approach. In case you don't have labeled data, you end up with clustering approaches, because you don't have supervision. And this is just a very simple tip, but even once you have decided which kind of approach you want to use, I mean regression or classification or clustering, whatever it is, you still need to decide which particular technique you may use, because classification itself is a family of approaches.
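Here is the short end-to-end sketch of the supervised pipeline promised above; the dataset (iris) and the estimator (logistic regression) are arbitrary illustrative choices, not the speaker's slides.

```python
# Labeled data -> fit an estimator -> predict labels for unseen data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split   # older releases: sklearn.cross_validation

X, y = load_iris(return_X_y=True)                       # feature matrix X, label vector y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)                 # one member of the classification family
clf.fit(X_train, y_train)                               # training is the supervised step

y_pred = clf.predict(X_test)                            # exercise the model on new data
print("accuracy on unseen data:", accuracy_score(y_test, y_pred))
```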
So classification itself is a family of techniques, and you have to decide which particular algorithm you are going to use. And after that, you also have to decide the set of parameters that lets your model best approximate your result. So we have a lot of things to decide. Another definition of machine learning is: machine learning teaches machines how to carry out tasks by themselves. It's that simple. Indeed, the complexity comes with the details, and in this talk we're going to deal with that kind of detail and see how we can handle it. So this is our starting point. We have the historical data we want to use, we have decided which kind of model we want to use for our problem, and we end up with this iterative pipeline, because we want to check whether the model we have built — the model we have decided to use — is right for the problem at hand. We want to evaluate the performance of the model and we want to optimise the model, which in this case means tuning its parameters to improve the performance. That is the iterative process we deal with in this talk. And what about the risks related to machine learning? First of all, we may end up analysing unstable data, so we need to be robust against data that may contain noise. On the other hand, as I said before, machine learning is essentially algorithms, so we need to test whether the code we have written contains programming faults. We may also end up with a problem called underfitting. Underfitting means that the learning function we decided to use — and sometimes this means the set of parameters we gave our model — is not properly suited to our data: the learning function does not take enough information into account, so the model is not accurate enough to learn from our data. The opposite problem is overfitting: the learning function does not generalise enough. This is quite a difficult phenomenon to discover, and we will see that there are techniques to deal with it. And finally, we have the unpredictable future: we don't actually know whether our model is working or not, so we need to check and track the performance of our model while it is running. How do we cope with these risks? If we want to reduce the problem of unstable data, we have testing, so we are required to do some testing of our code. If we want to avoid underfitting or overfitting, we have a technique called cross-validation, and we will see some examples of that. And for the unpredictable future, precision and recall tracking over time. Do you know what precision and recall are? Okay, I'll try to explain a bit — no problem. So let's start with dealing with unstable data. The point is: test your code. Testing your code is one of the things I suggest you do most of the time. Thank you. In Python we have a lot of tools for testing. We have the great unittest module, and unittest is basically built around a set of assertions.
For instance, we have assertEqual(a, b), which tests whether the object a is equal to the object b. We have a lot of assertions; the last column in the figure refers to the Python version where each one was introduced in unittest. Let me just briefly remind you that the unittest module is a bit more extended and improved in Python 3 with respect to Python 2, and I will show you an example of that in a couple of slides. Moreover, we have assertions to test exceptions, assertions to test warnings and even assertions to test logs. This is an example of how you can use assertLogs: basically, you test whether the output of the log corresponds to what you expect. But in the case of machine learning we need to take into consideration that we're basically dealing with numbers. In fact, one of the most important features of scikit-learn is that data are represented through matrices. In general we end up with the feature matrix X represented as a matrix of numbers, and labels that are basically arrays of numbers. So here we have to deal with numbers: the unit tests we want to write have to deal with numerical problems, and we need to compare arrays and floating point numbers. In this particular case NumPy comes to our help. NumPy — I don't know if you already knew that — has a testing module that includes some additional assertions, for instance assert_almost_equal, approximate equality, and some assertions related to array comparison. We will see a couple of examples. For instance, if we want to assert that two numbers are almost equal, we can use the assert_almost_equal assertion in the numpy.testing module, and we can specify the number of decimal places at which the two numbers are compared. In the first case we compare the numbers at seven decimal places, so the test passes. In the second case, since the last digit is different — here the decimal places to take into consideration are eight — the test fails, and we get an assertion error saying that the arrays are not almost equal to eight decimals, reporting the actual and the desired values. This is one of the things we need to take into account when we deal with floating point numbers. Moreover, we may assert that two arrays are equal: NumPy provides two different functions, assert_allclose and assert_array_equal. The assert_allclose function implements this comparison and takes some additional parameters: atol, which is the absolute tolerance, and rtol, which is the relative tolerance. In this case the test will pass. If we use assert_array_equal instead, these two arrays are different, and this is the assertion error we get: the mismatch is 50%. Again, if we want to compare floating point numbers, we may take into account the so-called ULP, the unit in the last place, which is usually referred to as the epsilon. If we want to know what the epsilon is for NumPy and for floating point numbers in general, we can get it with np.finfo(float).eps. And in this case, if we want to test whether two arrays are equal, in the first case the test passes because we're checking whether a number and that same number plus one epsilon are considered equal.
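A small sketch of the numpy.testing comparisons just described (my own example values, not the ones on the slides):

```python
import numpy as np
import numpy.testing as npt

# Compare floats up to a number of decimal places.
npt.assert_almost_equal(1.000000001, 1.0, decimal=7)    # passes
# npt.assert_almost_equal(1.0000001, 1.0, decimal=8)    # would raise AssertionError

# Compare whole arrays: assert_allclose uses atol/rtol tolerances,
# assert_array_equal demands exact element-wise equality.
a = np.array([1.0, 2.0])
b = np.array([1.0, 2.0 + 1e-9])
npt.assert_allclose(a, b, rtol=1e-7, atol=0)            # passes
# npt.assert_array_equal(a, b)                          # would fail: 50% mismatch

# Comparisons in units in the last place (ULP), using the machine epsilon.
eps = np.finfo(float).eps
npt.assert_array_almost_equal_nulp(np.array([1.0]), np.array([1.0 + eps]), nulp=1)       # passes
# npt.assert_array_almost_equal_nulp(np.array([1.0]), np.array([1.0 + 2 * eps]), nulp=1) # fails
```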
That first epsilon-based comparison passes because we're only adding a single epsilon, so due to the floating point representation the two values are considered equal. In the second case the test fails because we're adding a quantity greater than the epsilon, greater than the unit in the last place, so the two numbers are considered different: X and Y are not equal to 1 ULP (max is 2). Finally, numpy.testing is great because it also has some more tools to help with your testing. First, it has some decorators that integrate with nose, the nose testing framework. Just as an example, the decorator shown in the slide is "slow", which lets you mark a function to tell the framework that the test is supposed to run slowly — what that means depends on your personal definition. Moreover, we have the mock framework, which is included in unittest in Python 3. This is one of the features I was referring to when I said that the built-in unittest module in Python 3 is a bit extended and enhanced with respect to the one in Python 2. In Python 3 you can do something like "from unittest import mock" and it works. In Python 2, if you try to import mock from unittest you get an error, and if you want to use mock in Python 2 you should do a "pip install mock", which is the mock package available on PyPI. Let's see an example — do you know what a mock is? Okay, so no problem. Basically here we define a class, a nuclear reactor, that calls a function, the factorial. The factorial prints a message — this message is just used to check whether the actual code is exercised or not by the mock — and calculates the factorial of the given input n. So this is the test. In the first test we mock the factorial function, and in the second test we don't. This is the output. Here we have mocked the function, we test the assertion, and we get "working", which comes from the actual code we exercised — in this case the do_work method. We assert that the output of the mock is 6, but we have already defined that ourselves, and no other message has been printed: nothing from the factorial has been printed here because the actual code has not been exercised, it's just been mocked. In the second case we get an assertion error, because a factorial of 3 is not supposed to raise any exception, so we have an assertion error here. So here we are exercising the real code, and here we aren't. Okay, is it clear? Okay, thank you. So this is the part related to unstable data. What about model generalisation and overfitting? I don't have the time to explain the code, I'll just show you the example. The two most important parts of this code are these ones: we randomly generate some data in this example and we try to apply different algorithms — in this case a linear regression algorithm — to this data, using different features, polynomial features in particular. The different polynomial features have been generated by the PolynomialFeatures transformer in the scikit package, and they have different degrees: we try to apply features of degree 1, 4 and 15 and test what the performance of each model is. This is the output. The blue dots are the data, and you can see in green the true function and in blue the function approximated by the model.
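The code being summarised here is in the spirit of scikit-learn's well-known under/overfitting example; a compact reconstruction (mine, not the speaker's exact listing) looks like this:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def true_fun(x):
    return np.cos(1.5 * np.pi * x)

rng = np.random.RandomState(0)
x = np.sort(rng.rand(30))
y = true_fun(x) + 0.1 * rng.randn(30)      # noisy samples of the true function

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression())
    model.fit(x[:, np.newaxis], y)
    # Score on the training data only -- exactly the trap discussed next.
    print(degree, model.score(x[:, np.newaxis], y))
```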
In the first case we have a model which is underfitted in this case. The model defined, so a linear model here, a linear model with linear features is not taking into account enough information. In this case we have a very good model so it's perfectly suited for our test data and in this case the model is overfitting because it's trying to, it's trying to, to this case, this is not very good approximation. So if we look at this particular case, it seems that if we define a model with a polynomial feature of degree 4 for this particular data, we are done. So we have the perfect model we may have. Indeed this is not because this particular problem here, this particular model sorry, has been exercised only on training data and the problem is this model is in some sense overfitted. What does it mean? This means that if we consider just the training data, we perfectly fit all the training data but the model does not generalise any, does not generalise in any sense because if we, if the model is going to, not really, I'm going to conclude here. If the model will see some new data, the model has been too much trained on the training data so no generalisation is allowed on this model. So how we can cope with this kind of problem? The one extremely important part in the model evaluation is to apply a technique which is called cross-validation and in this particular case the psychic package helps us to, with a lot of built-in function that allows us to apply cross-validation and model evaluation techniques. In this particular case we want, we apply a very simple cross-validation which is called train and test split. So basically we get the input data and we split the data into different sets. So we have the training sets and the validation set and we, we train the model on the training data and we evaluate the prediction performance of the model on the, the validation data. One kind of technique to, to see the property, so the prediction property, the prediction performance of the model is the so-called confusion matrix. In this case this is a classification problem, three classes by three classes, a multi-class problem and we see that in this case we have three missed classes in this classification problem. Another more, another more complicated example is, unfortunately I don't have the time to, to show you is the, this is an example of the K-neighbors classifier applied on some data. So this, okay, let me just do, you can include with this. This is very interesting because we want to, to test the, yeah, okay, thank you. We want to test the performance on the training data and on the cross-validated data. So we apply here the function which is called shuffle split. So basically we get the samples. We have 150 samples. We, these are the, the aforementioned function to generate the true function, so the x and y data. So as the regression problem explained before. And then we want to compare the learning curve. So basically the learning curve is the performance of the training score with respect to the cross-validation score. And this is the cross-validation score for the degree for polynomial. So basically here we see that when we enlarge the number of training examples we consider, the errors between the training and the cross-validation score is basically reduced to zero. So it's a very huge model. 
And in the case of a polynomial of degree one — which was the underfitting model — the gap between the cross-validation curve and the training curve stays large, so it's not a good model. Okay. Finally, some conclusions. I have more slides but no time to show them, I'm sorry. So, in conclusion, the most important advice: it's always important to have tests for your code, especially if you have to test numerical data and numerical algorithms. Another suggestion — just a hint, a reference to look into — is something called fuzz testing. Fuzz testing is very interesting for numerical analysis because it basically generates randomly perturbed input data, so the technique is usually used to test the robustness of your code, that is, to check how your algorithms behave on randomly generated data. Okay. So thank you a lot for your kind attention.
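To recap the evaluation techniques from this part of the talk, here is a minimal split-and-evaluate sketch (my own, with made-up data). The module path is the one used by recent scikit-learn releases; at the time of the talk these helpers lived in sklearn.cross_validation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.RandomState(0)
X = rng.randn(200, 2)                          # made-up two-class data
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold out part of the data: train on one split, validate on the other.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# The confusion matrix shows how many validation samples were misclassified.
print(confusion_matrix(y_val, clf.predict(X_val)))
```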
Valerio Maggio - Machine Learning Under Test One point usually underestimated or omitted when dealing with machine learning algorithms is how to write *good quality* code. The obvious way to face this issue is to apply automated testing, which aims at implementing (likely) less-buggy and higher quality code. However, testing machine learning code introduces additional concerns that have to be considered. On the one hand, some constraints are imposed by the domain, and the risks intrinsically related to machine learning methods, such as handling unstable data, or avoiding under/overfitting. On the other hand, testing scientific code requires additional testing tools (e.g., `numpy.testing`), specifically suited to handle numerical data. In this talk, some of the most famous machine learning techniques will be discussed and analysed from the `testing` point of view, emphasizing that testing would also allow for a better understanding of how the whole learning model works under the hood. The talk is intended for an *intermediate* audience. The content of the talk is intended to be mostly practical, and code oriented. Thus a good proficiency with the Python language is **required**. Conversely, **no prior knowledge** about testing nor Machine Learning algorithms is necessary to attend this talk.
10.5446/20215 (DOI)
Thank you for the kind introduction. Hello, I'm Holger Peters from Llyander and I'm presenting to you today how to use scikit learns good interfaces for writing maintainable and testable machine learning code. So this talk will not really focus on the best model development or the best algorithm, it will just show you a way how to structure your code in a way that you can test it and that you can use it in a reliable way in production. For some of you who might not know scikit learn, scikit learn is probably the most well known machine learning package for Python and it's really a great package, it has all batteries included and this is its interface. All right, the problem in general that I'm talking about is that of supervised machine learning in this talk and just imagine a problem. We have on the left side here on the table, we have a table with data, it's a season, that's spring, summer, fall and winter, we have a binary variable encoding whether we have a day that is a holiday or not, each row is a data point and each column is what we call a feature. On the right hand side we have some variable that we'll call a target, it is closely associated to the features and the target is a variable that we would like to predict from our features. Features are known data, targets are data that we want to estimate from a given table on the left. In order to do this, we actually have one data set where we have features and targets matching features and target data and we can use this to train a model and then have a model that predicts. So the interface is as follows, we have a class that represents a machine learning algorithm, it has a method fit that gets features named x and a target array called y and that trains the model, so the model learns about the correlations between features and targets and then we have a method predict that can be called upon the trained estimator and that gives us an estimate y for the given model and the given features x. This is the basic problem of machine learning, there are algorithms to solve this and I'm not going to talk about these algorithms. I rather would like to focus on how to prepare data, the feature data x in this talk and how to make it in a way that is both testable, reliable and readable to software developers and data scientists. If you, I'm sure you want to see how this looks like in a short code snippet and this is actually quite succinct. So in this example here, we generate some data sets, x train, x test and y train, y test, then we create a support vector regressor that is some algorithm that I take off the shelf and so I can learn. We fit the training data and we predict on the test data set and in the end we can obtain a metric, we can test how well is our prediction based on our input features x test. So this is a trained model in scikit-learn, it's very simple, very easy and the big question now is how do we obtain or how can we best prepare input data for our estimator because that table that I showed you might come from an SQL database or from other inputs. It sometimes has to be prepared for the model so we get a good prediction and you can think of this preparation in a way, it's a bit like some preparation as it's done in a factory so there are certain steps that are executed to prepare this data and you have to cut pieces into the right shape so that the algorithm can work with them. 
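A rough reconstruction of the kind of snippet just described (a sketch with made-up data, not the slide itself):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 3))                               # made-up feature table
y = X.dot(np.array([1.0, -2.0, 0.5])) + 0.1 * rng.randn(200)

X_train, y_train = X[:150], y[:150]                          # data we learn from
X_test, y_test = X[150:], y[150:]                            # data we predict for

estimator = SVR()                   # an off-the-shelf algorithm from scikit-learn
estimator.fit(X_train, y_train)     # learn correlations between features and target
y_pred = estimator.predict(X_test)  # estimate the target for unseen features
print(mean_absolute_error(y_test, y_pred))
```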
One typical preparation that we have for a lot of machine learning algorithms is that of a normally distributed scaling so what we imagine that your data has very high numbers and very low numbers but the algorithm really would like to have values that are nicely distributed around zero with the standard deviation of one. Such a scaling can be easily phrased in Python code so x is an array and we just take the mean over all columns and subtract it from our array x so we subtract the mean of each column from each column and then we calculate based on this we calculate the standard deviation and divide by the standard deviation so now each column should be distributed around a mean of zero now with a standard deviation of approximately one and I prepared a small sample for this so you can see above an input array x and it has two columns. First focus on the right most column that would be a feature variable with values 32, 18, and 31 of course in reality we would have future arrays but for the example a very small one is sufficient and then we apply our scaling and in the end that column now has values that are based around zero and are very close to zero. Now I put in another problem that we have in data processing we have a missing value so just imagine I showed you in the first slide I showed you an example where we have weather data just imagine that the thermometer that measured the temperature was broken on a day so you don't have a value here but you would like that your estimation you would still like an estimation for that day and in such cases we have ways how to fill this data with values and strategies so one strategy is just to replace this not a number value with the mean of this feature variable so you could take the mean of temperatures of historic data to replace such a missing temperature slot and because if you apply our algorithm with the mean and dividing by standard deviation what you'll get is just a yeah in this example you'll get a data error from our code because another number values will just break the mean so I've prepared a bit some code that is does a bit more than our code before so before we just subtracted the mean and divided by the standard deviation and now we would like to replace not a number values by the mean and the reason our code failed before was that taking the mean of a column that contains not a number values numerically just raises not a number so here I replaced our numpy mean function by the function numpy non mean which will yield even with non values in our array x it will yield a proper value for the mean then we can subtract again as we did before the mean and divide by the standard deviation and in the end we'll execute a function numpy non to num which will replace all not a number values by zero and in our rescaled data zero is the mean of the data so we have replaced not a number values by the mean and so how does this new code transform our data and it actually seems to work pretty well the same data example with a new code as a resulting array where both columns are distributed around zero with a small standard deviation and so this is an example of some data processing that you would apply maybe to your data before you feed it into the estimator and yeah this small example actually has a few properties that are very interesting so I said that we if we go back to our example we actually transform our array x and take the standard deviations of all columns and the mean of all columns before we call estimator dot predict but what 
about the next call when we call estimator dot predict there's also an array x that is feed fed into and we have to process data that goes into this predict accordingly as we have transformed the data that goes into fit why is this because our estimator is has learned about the shapes and correlations of the data that we gave it in fit so the data has to look has to have the same distributions the same shape as the data that it saw during fit and how can we do this how can we make sure that the data has been transformed in the same way and so I could learn has a concept for this and that's the transformer concept a transformer is an object that has this notion of a fit and a transform step so we can fit data fit and as transform we can train it during with a method fit and we can transform it with a method transform and there's a shortcut define as I can learn fit transform that is both at the same time what's important about this is transform returns a modified version of our feature matrix x given a matrix x and during the fit it has to it can also see a y and so now we can actually rephrase our code that did the scaling and not number replacement in terms of such a transformer so I called this I wrote a little class it's called a not number guessing scalar so because it guess us now replacement values for numbers and it scales the data and I implemented a method fit that has this the mean calculation as you can see and it saves the means and the standard deviations of the columns as attributes of the object self and then it has a method transform and transform does the actual transformation it subtracts the mean and it divides by the standard deviation and it replaces not number values by zeros which are zero is the mean of our transformed data and using this pattern we can fit our not number guessing transformer with our training data and then transform the data that we actually would like to use for predict we can transform it in the very same way and another opportunity here is since we have a nicely defined interface and for this we can actually start testing it and I wrote us little tests for our class I think you remember our example array and I create a not number guessing scalar I invoke fit transform to obtain a transformed matrix and then I start testing assumptions that I have about the outcome of this transformation and now the issue that this test actually this this test finds an issue our implementation was wrong because if I calculate the standard deviation for each column and I expect the standard deviation for each column to be one I realize that the standard deviation is not one and that has a very simple reason if we look back at the code I calculate the standard deviation of the input sample before I replace not a number values with zero with the mean so in this example the standard deviation of the input sample is wider than the actual distribution of the data after replacing not a number values with with the mean and because the mean is in the center so and we map not a number values also to the center of the data and that makes the distribution kind of smaller and so in a way if we want to fix this code we have to we have to think about this transform method and the solution is actually to make two transformation steps at first we want to have one transformation step that replaces not number values with the mean and then we want to have a second transformation step that does the actual scaling of the data so we want two transformations and scikit-learn has a nice 
way to do this it offers ways to compose several transformers several transformations in this case we use a building block and I apologize for the low contrast we use building blocks that are called pipelines a pipeline and a pipeline is a sequential is like a chain of transformers and so during fit when we have when we are training and learning from a feature matrix x we use a first transformator transformator one and invoke fit transform to obtain a transformed version of the data and then we take our second transformer also apply fit transform with the result of the first transformation and finally we will obtain a transformed data set that was transformed by several steps it can have an arbitrary number of transformers in the predict when we have already learned the properties of the data like in our example the mean and the standard deviation we can just invoke transforms and get a transformed x in the end from building a pipeline in scikit-learn we can build them pretty easily there is a make pipeline function and we pass it transformer objects and it will it returns a pipeline object and a pipeline object itself is a transformer that means that it has the fit and the transform method and we can just use it instead of our number guessing scalar that I just presented so we could go back and rewrite this class into two classes one doing the scaling and one doing the number replacement or the question is maybe there is actually some someone has solved this for us already and indeed Python has batteries included and scikit-learn has batteries included so we can actually also use two transformers from scikit-learn's library one of these transformers is called the imputer because imputes missing values and so here number would be replaced by the mean and then we have the standard scalar that scales the data that is distributed in this example represented by the red distribution to one to a data set that is distributed around zero and these two transformers can be joined by a pipeline so here you can see this we just put together the building blocks that we already have we saw make pipeline we use make pipeline here and pass it a imputer instance and a standard scalar instance and then if we fit transform our example array we can actually make sure that our assumption holds true that we would like to have a standard deviation of one we could hear also check for the means and some other tests we have wrapped the data processing with those scikit-learn transformers and we've done this in a way where we can individually test each building block so assume that these were not present in scikit-learn we could just write them ourselves and the tests would be fairly easy and yeah I think that this is the biggest gain that we can have from this so if you're leaving this talk and you want to take something away with it something away from it if you want to write maintainable maintainable software if you want to avoid a spaghetti code and your numeric code try to find ways how to separate different concerns different purposes in your code into independent composable units that you can then combine and you can test them individually you can combine them and then you can make a test for the combined model and that's really a good way to structure your numeric algorithms so in the beginning I showed you an example of a machine learning problem where we just used a machine learning algorithm with an scikit-learn estimator that we fitted and predicted with now I extended this example with a pipeline that 
does the pre-processing make pipeline we use the imputer we use a standard scaler and we can also add our estimator to this pipeline and now our object S does contain our whole algorithmic pipeline it does contain the pre-processing of the data and it does contain the machine learning code and also it does contain all the fitted and estimated parameters coefficients that are present in our model so we could easily serialize this estimator object using pickle or another serialization library and store it to disk or send it across the world into a different network and then we could load it again restore it and make predictions from it and so to summarize what scikit-learn and these interfaces can do for you and how you should use them we found that it's really beneficial to use this these interfaces that scikit-learn provides for you if you want to write pre-processing code and you can use the fit transform interface for the transformers use them write your own transformers if you don't find those that you need in a library if you write your own transformers try to separate concerns separate responsibilities estimating or scaling your data has nothing to do with correcting other number values so don't put them into the same transformer just write to and compose a new transformer out of the two's for for your model in the end if you keep your transformers and your class is small they are a lot easier to test and if tests fail you will find the issue a lot faster if they are simple and use the features like serialization because you can actually quality control your estimators you can store them you can look at them again in the future it's really handy and in the short time I was not able to tell you everything about the compositional and the testing things that you can do with scikit-learn so I just wanted to give you an outlook on what else you could look at if you want to get into this topic there are tons of other transformers and other meta transformers that compose in scikit-learn that you can take a look at for example a feature union where you can combine different transformers for feature generation and also estimators are composable in scikit-learn so there's a cross validation building block the grid search that actually takes estimators and extends their functionality so their predictions are cross validated according to statistical methods so I'm at the end of my talk I thank you for your attention I'm happy to take questions if you like and if you also if you want to chat with me talk with me you can come up to me anytime hi hi could you please describe your testing environment to use a like a standard library like unit tests and like that too um well basically we we use unit testing frameworks like unit tests or pythes I personally prefer pythes as a test runner and we structure our tests or structure the tests like we would use unit tests in other situations so in the most basic form testing numeric code is not fundamentally different than testing other code it's it's code it has to be tested you have to think of inputs and outputs and you have to structure your code in a way that you don't have to or that in most cases you don't have to do too much work to get a test running and so yeah we have some tools to to generate data and to get more tests that are more going into the direction of integration tests but in general we just use the python tools that non data scientists also use other questions yes that is so if I understood the question correctly the question was if we 
also apply the transformations to the test data so you're talking about the data that I passed to predict right in the first example not the one that you used for the training the one that's yeah so sorry here you're talking about yeah exactly yes we do this is this is the purpose of splitting in the transformer into those two methods so just pull up the slide again the whole purpose of splitting fit and transform here is that we can repeat repeat this transformation in transform without having to change values for them those estimated parameters mean and standard if we would execute the code and fit again then we would not get the same kind of data into our algorithm that the algorithm expects any other how do you track your model performance over time so in some of our applications we have like data going for for years and we have models that are built up and then for instance that model the assumptions underlying probabilities of the data so we're using mostly be Asian models and the underlying probabilities are changing and we want to revalidate to see how on previous datasets or versions of datasets how the models that are overfitting or underfitting depending on what we have so are you doing anything across versions of datasets to make sure that you know your assumptions aren't missing stuff or adding a new stuff that you have didn't have before okay so you're asking how we actually test the stability of our gene learning models well this is done with cross validation methods and we have for sample datasets we have reference so reference scores and if the reference scores are going getting worse in the future then tests fail basically and then if that happens one has to look into into things why why things are getting worse there's not really a better way than using cross validation methods yeah it's more of a monitoring thing so this talk was more about actually testing testing the code whereas your question was rather about testing the quality of the model so I think these are two different concerns yes they're complementary yeah definitely so I just got curious when you do this what you're working I mean you're working on ipython notebook or do you do it as separate scripts or what do you what do you do for this yeah I'm personally not using ipython notebooks that much I just use I write tests in test files and execute my test runner on them and then use continuous integration and all the tooling that is around unit testing yeah I personally well ipython notebook is no environment snow I much that is really great at exploring things but it's not a environment for test-driven development and so there's no test run an ipython notebook and I personally think all the effort that I put into thinking about some test assertion that I could type into an ipython notebook if I put it into a unit test and check it into my repository it's done continuously over and over again so I really prefer this over extensive use of ipython notebooks I do use it if I want to quickly explore something this is just an add-on so no question your talk was about the testing stuff and this is really great with this modules let's say small units but of course it's also important to have reusability then because then you can really yeah change model or apply to different problems reusing parts of your pipeline any other questions okay thank you thank you very much
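To pull the whole talk together in code, here is a minimal sketch of the composed pipeline (my own illustration; note that the Imputer class used in the talk has since been replaced by sklearn.impute.SimpleImputer):

```python
import pickle
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer        # successor of the talk's Imputer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X_train = np.array([[np.nan, 32.0],
                    [5.0,    18.0],
                    [3.0,    31.0]])
y_train = np.array([1.0, 2.0, 3.0])

# Pre-processing steps and the estimator composed into one object.
model = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler(), SVR())
model.fit(X_train, y_train)

# The fitted pipeline can be pickled, shipped and restored as one unit.
restored = pickle.loads(pickle.dumps(model))
print(restored.predict([[4.0, np.nan]]))
```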
Holger Peters - Using Scikit-Learn's interface for turning Spaghetti Data Science into Maintainable Software Finding a good structure for number-crunching code can be a problem, this especially applies to routines preceding the core algorithms: transformations such as data processing and cleanup, as well as feature construction. With such code, the programmer faces the problem, that their code easily turns into a sequence of highly interdependent operations, which are hard to separate. It can be challenging to test, maintain and reuse such "Data Science Spaghetti code". Scikit-Learn offers a simple yet powerful interface for data science algorithms: the estimator and composite classes (called meta-estimators). By example, I show how clever usage of meta-estimators can encapsulate elaborate machine learning models into a maintainable tree of objects that is both handy to use and simple to test. Looking at examples, I will show how this approach simplifies model development, testing and validation and how this brings together best practices from software engineering as well as data science. _Knowledge of Scikit-Learn is handy but not necessary to follow this talk._
10.5446/20214 (DOI)
Hello. I'm going to tell you about property-based testing and fuzzing, and I'm calling this talk, tongue in cheek, "failure-seeking missiles". So I'm here to change how you test your code. Let's see where the audience's starting point is. Hands up if you've written tests. Okay. Hands up if you've only used hard-coded values as example data to pass into your code. Fair enough. Hands up if you use some sort of random input. Good. Hands up if you've used property-based testing. And hands up if you've done any fuzzing. Okay, good. So yeah, some good things to learn about today then. So I'm here to ask you: are you testing your code hard enough? Are you stretching it? Are you asking it questions it wasn't expecting? Are you an aggressive interviewer? Or are you a softball interviewer who asks the easy questions? She wasn't expecting that question. Maybe you should ask your code some questions it's not expecting. So what are we trying to achieve with our test input data? Possible goals: to start off with you want to cover the happy path — that's the one that's going to earn you some money and get the job done — but that's the absolute minimum. You probably want to cover all of the code base. You want to cover exception handling and the validation when the user passes you data that you understand not to be valid. But what about unhandled exceptions? By definition you're not expecting them, so how are you going to think up examples for that data? If you knew what they were, you probably would have caught them already. And then, not just covering each line of code, but you really want to cover the independent paths through your code base. It's going to start sounding like a lot of work, isn't it? So I'd like to ask you to take a moment to think about the data that you pass into the tests that you've written. This is to help you contemplate. Which type of points do you pick when you're writing your example data? And where would an adversarial approach take you? I'm not an artist, but Google image search is quite handy. So this is an artist's impression of the central parts of the input space — maybe the obvious examples, jane@example.com. But maybe you're under-testing some more difficult examples. I've certainly seen unicode errors that would have been caught if example data had included a unicode snowman or something like that. Also just passing in empty lists and empty strings is good as a base standard for edge case testing. So how do we create test data? We can write hard-coded values like most of us have done. We can create purely random data with no feedback — Model Mommy does this, just gives you something conveniently random that will get the job done. Or by firing a failure-seeking missile. Boom. Hardcore. Let's take a closer look. So QuickCheck is a Haskell library — don't put your hands up just yet — it's been around for a while. Hands up if you've heard of it. Cool. So this is property-based testing. You specify a property of your code that must hold, and QuickCheck will quickly check if it can prove you wrong. It does its best to find a counterexample. So this is a little bit of Haskell. The basic thing to take away here is that two lists of integers go into a property and a boolean comes out, saying whether this property holds or not. So it's about reversing lists of integers. Basically, imagine a list of four integers and a second list of four integers here, and it's saying you're going to join them and reverse them, as opposed to reversing them and then joining them.
So it's a slightly dubious proposition, but we're going to see if it can prove it wrong. And at the bottom there, you can see zero and one as the inputs after some shrinking, and it has proved it wrong. But this isn't Euro Haskell monadcon. So let's not accidentally learn too much Haskell. Instead, let's find out what functional language developers think of our world. This is a direct quote. In an imperative language, you have no guarantee that a simple function that should just crunch some numbers won't burn your house down, kidnap your dog, and scratch your car with a potato while crunching those numbers. Fair enough, we like Python anyway. So hypothesis is the Python version of QuickCheck. It's more of an update because it adds some new features. Let's delve into the kidnapped dog world of Python. So just as a reminder, this was the Haskell version. In Python, there isn't a function that reverses lists that I've made on here. And in a similar way, you can see the act given decorator here specifies two lists of integers which map to the two inputs to the test function. And then at the bottom, you can see the property is defined just with a standard assert. We're running it with a pytest runner, and there's a little tiny hypothesis pytest plug-in that just helps you see the output a bit clearer. So it proves it wrong, and it actually comes up with the same counter example that QuickCheck did. And we didn't have to think up any example data ourselves, so that was an improvement. So what's going on here? How could this be working? So maybe it's doing a formal proof in the background. Maybe it's doing some sort of static analysis. Maybe it just passes a symbol into the top of the program, looks at all the manipulations, ends up with a formula, and then solves that formula. Even for mathematicians, they haven't quite got there yet. I tweeted a really interesting article called Will Computers Redefine the Roots of Math, a wired article. But no, they're not here yet. Mathematicians still have a job. And same in computer science, they haven't really managed to, especially not in Python. So that leaves us trying a crad time of examples, also known as fuzzing. That's what's really happening. Let's have a look at the dirtiness under the covers. Okay, so this is the first list of integers that hypothesis is sending in. If I go over here. They're pretty nasty, and it turns out that proposition is false. So it's proved it wrong in the first hit, but it doesn't want to show you that, because that's kind of ugly. I don't think that would pass code review if you tried to put that as a hard coded example. So it has a go at making it simpler. As we scroll down here, you can see the first list is getting shorter. So you've got three items now. It's worked out that big numbers are more annoying than little numbers. So those numbers at the top there are getting shorter, just two numbers, one number, keep on going. And the second list will get shorter as you get down, and it's getting it. It actually overshoots. It gets something so simple that when you reverse it in each way, it's actually true. So that's bad. Doesn't want to show you that. Simple, simple, try some empty lists, and this is the simplest one it could come up with. And then that's the one it ultimately shows you. So that's the one you copy, paste into your deterministic code boat into your test set. 
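Reconstructed from the description above, the Python test looks roughly like this (my sketch, not a copy of the slide):

```python
from hypothesis import given
from hypothesis import strategies as st

def reverse(xs):
    return list(reversed(xs))

@given(st.lists(st.integers()), st.lists(st.integers()))
def test_reverse_of_concatenation(xs, ys):
    # Deliberately dubious property: hypothesis finds a counterexample
    # such as xs=[0], ys=[1] and shrinks it for you.
    assert reverse(xs + ys) == reverse(xs) + reverse(ys)
```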
As an aside, if you run that again, it won't go through the whole business of those massive list of integers because it's got a local database of successful examples. So that's mainly for a speed enhancement. But also, if it's searched really hard and maybe there's a bit of luck that it's found a counter example, it wants to keep on to that because it might not find it next time theoretically. So what's going on here? It's generating random-esque input data. So it's not purely random. It's random but with a view to breaking your code. It runs the test repeatedly, so it's really worth bringing mine. This is not like a standard unit test that just gets run once. That at given decorator means that test actually gets run by default 200 times or at least until a falsifying example has been found. If it finds a counter example, it will then try and shrink that for the best of its abilities just to give you the cleanest, simplest counter example to prove your property wrong. So let's have a look at that random-esque data. Where did the integers come from? So this was the decorator. So this is strategies.list and strategies.integers. And the integer strategy is made up of two parts. The random geometric int strategy is basically saying, give me smallish numbers like maybe zero, maybe minus one, maybe 17 will break your code. And the other one, the wide range, says, well, I'm going to give you basically any random number from like anything that your Python interpreter can handle, say massive integers, and just maybe that will upset it a bit. These are strategies to relentlessly, they are relentlessly devious plans to break your program. And the list strategy is to say you pass it the elements you want in your list, and it averages at 25 elements, but you can set it to maximum size, minimum size. It's got sensible defaults, but you can override them if you need to. So Raymond Hettinger tweeted this, calling it the, so you might not know what the percent does, it's the remainder upon division. And he has suggested two properties that should hold. The result should have the same sign as y, and the absolute result should be less than the absolute on y. Okay, well, let's check. This is how we would write that. So there's no list of integers, there's just two integers here, and they relate to the xy inputs to the test function. So we've got a new function here, assume this is a way of giving feedback back to hypothesis, and it says, A, if this assumption proves false, in other words, if you give me a y that's zero, stop the test, it's not appropriate, and don't give me any more like that. Okay, and so in that way it's guiding itself to be more helpful, give you inputs that are more likely to be relevant. So we calculate the result, and I mean, I had to create a same sign function, but apart from that, it pretty much reads as English or copy and paste from the tweet. Let's see what the answer was. It passed. Okay, I should know better than doubting Raymond Hettinger, but I can and will property test his tweets. How does it do it? So the data strategies are probability distributions to pick elements from within that distribution. The guided feedback, I assume. Shrinking of counter examples to be clearer to read and easy to understand why they're breaking your property. 
And a database of failing examples for speed, especially when you're doing TDD: if it finds something wrong with your code — you've broken a property — you can have a go at fixing it, and it will try that example straight away again until you make it pass. The internals of hypothesis are really interesting. I won't explain them; they use a parameter system, and it's worth having a read — he's got a good page in the documentation about that. Let's look at one more strategy. We've seen integers; the floats are a bit more complicated. If you claim that your function accepts floating point numbers, it's going to do all these mathy-sounding things — Gaussian, bounded, exponential, maybe just some integers. You weren't expecting that, were you? And then some nasty floats are thrown in as well: zero, minus infinity, positive infinity, NaN. You can assume these away if you don't want them, because if you're doing maths they will probably break your code. There are some great advanced features of hypothesis. It makes it very easy to take the built-in strategies and make your own strategy. Say you've got a function that accepts a comma-separated list of integers: you could map a list of integers, have them joined by commas, and then pass that into your code, because you want your test data to be relevant to your test — it can't just all fail at the first hurdle because it's too random. So you might want to build your own strategy like that. There are plug-ins for Django, a bit like Model Mommy or Factory Boy, and for NumPy as well — that's a prototype. There's also something a bit experimental, stateful testing, where you give hypothesis the controls to your program and it tries to find a sequence of actions which causes a test failure. So, I don't know, that sounds very interesting to me. Then moving on. Let's look at another failure-seeking missile that's getting a lot of attention recently: American Fuzzy Lop. This is a fuzzer, a second version — the first one was called Bunny the Fuzzer, so I think Michal Zalewski likes rabbits, and they're certainly fuzzy. It specialises in security and binary formats. Low-level libraries are essential to everything we do, whether it be accessing a database, image processing or encryption. We'll get on to Python AFL in a minute, but just for a moment let's think at the C level. Just to remind you, a fuzzer is something that fires data at your program, attempting to crash it. So we've kind of moved on from property-based testing — this is more about crashing your code than specifying properties. These are things that you want to leave running for a good while, maybe on multiple cores, and speed is very important because the more ground you cover, the more likely you are to find some interesting inputs. Fuzz testing has been around for a couple of decades or more. There's zzuf — fuzz backwards — that people have been using for a good while, and AFL is kind of a new style with some guiding going on. But traditional fuzzing is not dead. This is very important. AFL might be the new cool thing that came out last year, but Google has been running fuzzing against FFmpeg for a couple of years and found a thousand bugs — literally a thousand commits fixing those bugs, so not to be sniffed at. If you don't know, FFmpeg is a video processing library; it's in Chrome, it's probably in your local video player that you have on your Ubuntu desktop.
So the strategy was to take small video examples, small video files, mash them together with mutation algorithms, maybe splice them together, mutate some bits or bytes here and there. And then admittedly they had 500 and then eventually 2,000 cores over a period of two years, so maybe not just your laptop. But they found they made great progress. A lot of memory management bugs were found. Actually I was speaking to one of the ffmpeg developers last night, and I can confirm it's not just because they've written awful, awful code, it's just because this is quite a hard thing to do. The video specifications can be 600 pages long, and they have to write very, very fast code, that's why they don't write it in Python. They have to look after all their memory management, and it's very easy to not do that perfectly. Now there's a quote on that blog post where they thought 20 to 30% of the problems found could easily be exploitable. So there's 100 to 200 zero day exploits that they found with that ffuzzing. Something tells me that local security services and hackers, this would be a good approach they might be doing, let's hope the good guys find these bugs first. So AFL's goals, it does need to be very fast because it's got a lot of ground to cover, it needs to be reliable because if it breaks overnight it's not going to get much done. I think in the past ffuzzers took a lot of configuration, but this one tries to be very simple, not require much setup, I'll show you in a minute. And it does the things traditional ffuzzers do in terms of taking some sample inputs, mutating them, but it also has a little secret which is it adds compile time instrumentation. So this means that when you compile your C code at each branch point it adds a little hook that records the path taken through the code. So this like we saw earlier about code coverage and taking independent paths through the code, it's able to get feedback on where its test data travels through the code base. You might be using GCC before, replace it with the AFL version. So here's a toy example, we're literally just reading 100 characters from standard in and the bug we're simulating is if the input is foo, then we're going to blow up. So it's a toy example. Let's just compile that. So there's no configure here, we're just compiling it. And when I echo into the program there, I've got some print statements, so it said 1, 2, 3, 4 and it did a port. Okay, so it works. Let's try ffuzzing it. So in the, minus size the input directory, so I've got one sample input which is literally just one file with one dot in it just to say here's something to get going on. I'm not going to tell you what the answer is, see if you can get to the answer. There's an output directory where the results will end up and it's worth saying if you're on a laptop like me, you probably want to use a RAM disk because it's going to do millions of writes and you might have your SSD stop working quicker than you thought. And it's just going to fire this test data into the standard input of our program. So this is the dashboard you get with AFL. So draw your attention, I've got a laser here. So up here we've got total paths and unique crashes so it's found four paths through the code base which is those if statements. The strategy yield down here, this gives you a sample of some of the mutation operations it's doing so bit flips, byte flips, arithmetic. We haven't given it a dictionary here but I'll show you one of those in a minute. My notes have disappeared. Secret. 
Okay, recovered. Right. And the other thing to show you: it's done almost a million runs in, what's that, two and a half minutes. So within the findings directory you get a queue of interesting inputs that it has found take a different path through the code base. It started with that dot I mentioned, which was my sample input. After manipulating that, it found one that started with an F — so it's clearly trying thousands and thousands of examples, but when it happened to find one that started with an F, it took a new code path, so it recorded that for reuse. Similarly "fo". So it's kind of stepping up, making it further through the code base each time. And in the crash directory it's found an example input that crashes the code, which is exactly what we were looking for. So let's just have a look at that crash file. It has a kind of long file name, but this is where it records what's happened. It tells you signal six is a SIGABRT — we did the abort, so that's expected. It tells you that it's based on the third item in the queue, and that it's done some eight-bit arithmetic — in other words, it's replaced that 'ÿ' (y with umlaut) with an exclamation mark. So you can kind of see how it's working, you can see how it's manipulating previous inputs. So it's able to stand on the shoulders of what it's achieved so far and get one step further. So in the last year it has built a very, very impressive trophy case — this is about a third of the list, by the way. There are security libraries, image libraries, SQL libraries, you name it almost, but generally it focuses on libraries that can take arbitrary binary input. You've also got bash there; you have to give it some more help when you're doing a kind of non-binary input, because if it just fires random characters at SQLite, it's not going to get very far. So let's have a look at a specific example: SQLite. It's worth saying SQLite is a very, very well-respected library in terms of testing, and it had already been fuzzed by a traditional fuzzer, so you might think there's not much low-hanging fruit there. The approach taken was to start with a dictionary of SQL keywords — you literally just put these, one per line, in a file — and they grepped out some hand-written test cases from the SQLite test suite. And they found 22 crashing test cases. This is one of the simpler ones: it ends up arriving in a function with an argument not initialised, or something like that, or a zero-length argument where it was expecting a list of one or more things. So these were able to be fixed. So how does it do this? Let's just see an overview. It is a great traditional fuzzer, and you can use it without the instrumentation. It will search for inputs that span different code paths, and it uses genetic algorithms to mash together the examples it's seen so far, as well as just mutating those examples one at a time. But you can imagine it's searching the input space, and it's got some help — it's got some guiding by the instrumentation — but it's always going to be a slow process because the input space can be massive. It certainly can't just go A, B, C, D and do an exhaustive list of all inputs; that would take forever. So let's have a look at fuzzing CPython. It's worth saying that obviously the innards of Python are written in C, so this is a different proposition to fuzzing Python code, which we'll see in a minute. So you can download the Python source via Mercurial.
You can compile it very similarly to how the Python docs tell you, but using the AFL clang compiler. And you can start fuzzing it. So I've got a sample input and a sample target program here. I'm not a ctypes expert, so I can't explain that line, the magic line that you need there, but that connects things up. And you're literally just passing standard in to json.load; it's treating it as a file. And we're not catching Python exceptions here, because we're looking for exceptions that happen in the C code. So I ran this overnight the other day for eight hours. It didn't find any bugs yet. Maybe that shouldn't be so surprising, but it was that easy. And it did run 121,000 times. It is a lot slower than just running the toy example earlier because it's loading the whole Python interpreter, et cetera. But there are tools within AFL to make this faster and easier. So they have a fork server, and they have various techniques you can use to make things faster. It also gives you some hints before you start about putting your operating system into performance mode instead of power saving mode. So you could say this is more ethical than causing global warming by mining Bitcoin. Certainly it can cause laptops to get a bit hot and CPU intensive. Let's move on to Python-AFL, because this is not the EuroC null-pointer-exception conference. So this uses Cython to connect the C layer and the Python layer. It connects the instrumentation that we mentioned to the Python interpreter via sys.settrace. So every scope that's entered will log a little waypoint as the test data travels through the code base. And your unhandled Python exceptions get converted to SIGUSR1, which AFL will recognise as a signal. And you can see here py-afl-fuzz is basically just afl-fuzz with 'py-' typed in front. So it's literally just as easy to use. So here's an example Alex Gaynor did of using this to fuzz the cryptography library. So it's pretty simple. You have a little afl.start hook there that connects things up. And he's literally just passing standard input to decode a signature. He said it was fruitful, but I don't know if he listed any particular bugs that he found. But this is the general approach. So what are some interesting questions raised by these two libraries? In default mode, Hypothesis and AFL will give you new input data every time you run them. So this could be considered annoying by some people. They want to know if a new commit fails their tests. And if you've got different test data every time, well, maybe it failed because it found a different test input rather than your commit causing it to fail. So consistent pass or fail, some people insist on. On the other hand, you might find more bugs. So that would be handy as well. So I think the resolution between the two is to do the nondeterministic testing, maybe not in your per-commit testing, but to look for the counterexamples it pulls out and copy those into your deterministic test pack. Or just live with the nondeterminism and find more bugs. That's what the author of Hypothesis recommends. But you can put it in deterministic mode if you insist. So we've been thinking about random inputs. One way to think about this is that if things are too random, they won't even get to the starting gate. They have to be relevant to your code. On the other hand, you can't enumerate all the examples because there are too many. Your space is often just too massive.
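The actual harness code isn't reproduced in the transcript, so here is a rough sketch of the python-afl pattern just described: an afl init/start hook, then feed standard input to the function under test. Targeting json.loads is my own choice for illustration, and the run command follows python-afl's documented usage rather than anything shown in the talk.

```python
# Hypothetical python-afl fuzz target, sketching the pattern described above.
import json
import os
import sys

import afl  # from the python-afl package

afl.init()  # recent python-afl; older releases exposed this as afl.start()

try:
    json.loads(sys.stdin.read())
except ValueError:
    # Parse errors are expected for garbage input. Only *unhandled*
    # exceptions are turned into a signal that AFL counts as a crash.
    pass

os._exit(0)  # skip interpreter teardown to keep each run fast

# Run with something along the lines of:
#   py-afl-fuzz -i input_dir -o output_dir -- python fuzz_json.py
```

Whether such a harness finds anything still depends on how well the mutated inputs explore the code, which is the trade-off the maze picture below is getting at.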
If you give a happy path sample input and you don't mutate it enough, you're just going to go straight through the maze and come out the other end and you're going to think everything's fine. So you want to hit that sweet spot where you're reaching the dead ends of the maze, which represent all the paths through your code base, but without having them all fail at the first hurdle because they're too random. So which library should you use? If you insist on just using standard unit tests, you should at least try to be more adversarial, but I would say you can't expect the unexpected. So you should probably use Hypothesis if your input data is Python structures. And even if they're not built-in data structures, you can build your own strategies. It's quite fun as well. And also you don't have to think up the hard-coded examples. So people say that the test code base becomes easier to read because they're not distracted by the specificity of bob@example.com. It just says this function takes all strings or all lists of integers. And use something like AFL or python-afl if your inputs are binary, or, as we saw, maybe you're parsing text input like an SQL library. So in conclusion, we've seen two styles of test data generation. Humans are generally bad at picking random examples. Developers are bad at being adversarial to their own code bases, which they understandably love. Computers are fast, let them play with your code. And find more bugs before your customer or the secret service does. Let me end by saying, don't interrogate your code base like it's a fluffy bunny stuck up a tree. Fire a guided missile, blow the branches off the tree and clear up the mess. It's not just me saying it. Celebrity endorsement. Just a little reminder there. And also of interest, you don't even have to get up: the talk after this one, which will be more informative and better presented by Moritz Gronbach, who I haven't talked to yet, I hope I didn't cover all your points, is in this room directly afterwards. And there we go. I've been Tom Viner. Any questions? Thank you, Tom. Thank you. Thinking back to Guido's talks yesterday, is Hypothesis Python 3 ready, and could it be made to use the type annotations if they were available in the code? I think it is Python 3 compatible. But I don't know about the type hints. You'd have to ask the author of that library. Yes, sounds interesting. Thanks for the interesting talk. How does Hypothesis handle it if the code under test exhibits some randomness itself? Either voluntarily, maybe a Monte Carlo algorithm, or involuntarily by a mistake? Very good question. It will raise a flaky code warning. So it will tell you that; you can suppress the warning, I think, but it basically tells you that if your code is non-deterministic, then, you know, you're less likely to find helpful results. So you may want to put your code into deterministic mode itself or take another approach. I just wanted to point out, sorry, if you want to help making libraries, especially C libraries, more secure, there is a project called the Fuzzing Project by Hanno Böck, which helps you with some documentation to get started with AFL and fuzzing your favourite library. So that's a very good point to get started if you want to make things more secure. Can you just repeat the name of that? The Fuzzing Project, I believe. Cool. Thanks for the talk. You mentioned that Hypothesis iterates over its tests many times to produce results.
Do you run it as part of your standard unit test workflow? Do you run it somewhere else in your testing workflow? I personally would bite the bullet and use it in a TDD workflow. There was a talk the other day about Testmon, which uses coverage.py to only run the tests that are related to the code you're changing. So, you know, you could get a speed improvement from Testmon and then balance that with a slowdown from Hypothesis and just maybe end up where you were before, but with finding more bugs. I have a quick question. So, because Hypothesis is generating random input, is it a bit strange to use on a project where you have multiple developers, because then doesn't each developer have different input into the test? Yeah. So, I mean, the inputs are non-deterministic to start with. So, you know, even before you had a local database of examples, everyone's getting different test inputs. But this idea of sharing the database of found examples, I think, is still a work in progress. I think the developer of Hypothesis is still trying to think through whether it makes sense, you know, to, for example, have that on your CI server or whether that's a bit of a non-starter. You can add another decorator to tests to give them specific examples, to force it to use a specific example. So, if some developer found a certain input that was helpful, they could, you know, do a commit that hardcoded that as always present. All right. Please join me in thanking Tom once again. Thank you.
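For reference, a property-based test of the kind discussed in the talk might look like the sketch below. The round-trip property and the strategies are my own illustrative choices, not code from the talk; the @example decorator is the one mentioned in the Q&A for pinning a particular input.

```python
# Illustrative Hypothesis sketch: Python structures as input, no
# hand-picked values like bob@example.com.
import json

from hypothesis import example, given
from hypothesis import strategies as st


@given(st.lists(st.one_of(st.integers(), st.text())))
@example([0, ""])  # a specific input pinned permanently, as per the Q&A
def test_json_roundtrip(values):
    # Property: serialising then deserialising must give the list back.
    assert json.loads(json.dumps(values)) == values
```

If Hypothesis finds a counterexample it shrinks it to a minimal failing input, and that input can then be hard-coded with another @example so every developer and the CI server replays it deterministically.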
Tom Viner - Testing with two failure seeking missiles: fuzzing and property based testing Testing with purely random data on its own doesn't get you very far. But two approaches that have been around for a while have new libraries that help you generate random input that homes in on failing testcases. First **[Hypothesis]**, a Python implementation and update of the Haskell library QuickCheck. Known as property based testing, you specify a property of your code that must hold, and Hypothesis does its best to find a counterexample. It then shrinks this to find the minimal input that contradicts your property. Second, **[American fuzzy lop]** (AFL), is a young fuzzing library that's already achieved an impressive trophy case of bug discoveries. Using instrumentation and genetic algorithms, it generates test input that carefully searches out as many code paths as it can find, seeking greater functional coverage and ultimately locating crashes and hangs that no other method has found. I'll be showing how with **[Python-AFL]** we can apply this tool to our Python code.
10.5446/20213 (DOI)
Thank you and hello everybody. My talk is about, as Matt said, about continuous integration. A few words about me. I've been a Python web developer for more than 10 years and most of my professional and free time I spend on a project called Plone. It's an open source enterprise content management system written in Python. We have around 340 core developers worldwide and Plone powers websites like NASA, CIA, USA, Oxford, the Brazilian government and many more. And for around four years I've been the leader of the Plone continuous integration and testing team. So we make sure that our continuous integration systems work and that our testing is in good shape. So what's continuous integration? I guess everybody has heard that term at some point. And it's, in contrast to what many people think, not a piece of software that you can just install and then you do continuous integration. It's actually a software development practice, like test driven development for instance. The software development practice is about team members that integrate their work into a main branch of a version control system frequently. And each of these integrations or commits or pushes or whatever is verified by an automated build and test process. And this automated build and test process makes sure that code violations, test failures or bugs are detected as early as possible and also reported as early as possible to your developers. We all know those statistics about bugs, right? The later you detect bugs, the more they cost, right? So if we detect bugs early they're easier to track down and easier to fix. And one of the other advantages, big advantages, of a continuous integration system is that if you run your build and your tests automatically on a continuous basis, then you know that your development and also your deployment environment is in a working state. So as I said, there are three important parts in continuous integration. The first one is that you integrate frequently into your version control system, the second is that you have an automated build and test system, and the third is that you report. So keep those three items in mind because I will come back to that. Our first approach in the Plone community to continuous integration was actually Buildbot. Who here knows what Buildbot is? Oh, quite a few people. So Buildbot is a continuous integration framework written in Python. We had it set up but it's quite complex and, as I said, it's more a framework than an out-of-the-box solution. So you can't just install Buildbot and it will do everything that you want. You have to really know what you're doing. And it's hard to set up, so we barely used it. I mean, a few hardcore developers used it, but it wasn't really run on a continuous basis. It wasn't really integrated into our version control setup and, as a regular developer, you did not even notice or know about it. And around four years ago, in 2011, we introduced Hudson, what's now called Jenkins, into our process. And one of the developers who started to play around with Jenkins wrote that it's like Buildbot with a butler. So in comparison to Buildbot, Jenkins is really an out-of-the-box solution. You just install it and you have to configure it a bit, but then it basically works. So that was really nice. Also Jenkins comes with a nice user interface, so everybody can just go there and check the status and stuff like that. The downside is it's written in Java and, as a Python developer, you always prefer, of course, to use a beautiful piece of Python software, right?
But Java is a decent language and Jenkins is a very good software product in my opinion. It has a huge open source community around it with many plug-ins. It's backed by a company that offers commercial services on top of it called CloudBees. And we're really, really happy with it. So during my talk I will give you examples of what we do with Jenkins, but it's not very specific to Jenkins. As I said, continuous integration is a software development practice, so it's about the practice and the rules that you have, right? It's not about the software that you actually choose. So when we moved from Buildbot to Hudson, things looked a bit better. But we used nightly builds. I guess a lot of people do that, because your tests take quite a while and you don't want to run them on every commit for whatever reason, and then you run them on a nightly basis, right? While everybody sleeps, you can just run them for a couple of hours or whatever it takes, and next morning you will get a report to your mailing list saying this is the list of commits and now the build is either broken or it's fine. The problem with that is that you don't run your build for each integration. If you recall the definition that I gave you upfront about continuous integration, the important part is that you run your build and test process for each commit, because that's the only way to figure out which commit or which code change actually caused a regression, right? If you have 20 commits from different people and next morning you see, hey, the build is red, then somebody needs to clean that up, and usually the person who cleans that up is not the person who caused the violation, so it's costly to do that, right? And nobody does that. If you are in a company, you can force somebody, like a poor guy or girl, to fix the stuff for other people, but in open source communities it's even harder, because there are 20 commits and people say, hey, it wasn't me, right? My commit was really clean and perfect. If you run them on a nightly basis, your build is broken 99% of the time. That's at least my experience. So our software development and release process in the Plone community was like this. The build was broken 99% of the time, and then before a release our release manager said, hey guys, I want to make a release, and then two or three of the 340 developers, the really hardcore guys, started to fix tests for everybody else. We had like 400 or 500 test failures. We have around 9,000 tests in Plone. So we sat together for a day or two and we really fixed like a couple of hundred bugs before we could even make a release, and then we started to make releases of our 300 packages, and then our release manager could make the actual release, right? So that's what it took when we had those nightly builds. So how could we solve that nightly build problem? You can solve it by following the rule that you have one build and test run per commit. So how do you do that? By default, Jenkins uses polling to poll the version control system. You can set it to every 30 seconds or something and it polls it, and if a new commit is there, it creates a build. The problem with that is you will not get one build per commit, because it could be that two people commit at the same time, then you have two commits and, believe me, those two people will say it was the other one. Always. So you take one commit and you make sure that you have one build for that commit.
With today's version control systems, that's really easy, because GitHub has post-commit hooks if you host it on GitHub or Bitbucket, or, if you have your own Git repository, you can create a Git post-commit hook that triggers your CI instance and then you can have one build per commit. So you can trace the person or the commit that was responsible, so it's really easy to figure out what went wrong. In Plone, it's a bit more complex than that, because we have those 300 packages and one checkout doesn't mean we have the exact same checkout of all packages, but I will come to that later. And then what's important is that you preserve this commit information through your continuous integration pipeline. So you pass it through the builds, also so that you can notify people at the end, right, via email or anything else. So we have those three steps: commit, build, notify. And in order to be able to automatically build and test your software, you need an automated build. So we have tools for that in the Python community, right. In Plone, we use buildout; it's not widely used outside the Zope community, the Pyramid folks use it, but most people use pip or easy_install, which are also fine. You probably need to wrap them into bash files or anything like that. But you can automate your build, right. If you do that, you can use tox, for instance, on the CI system to configure what's run on the CI system, and on the Jenkins machine you can, for instance, use tools like ShiningPanda, a Jenkins plug-in that allows you to create virtualenvs or buildouts and install things via pip automatically. So it's just a convenience tool. We're not using it in the Plone community because a bash script is enough, but if you want to do stuff with Python and you want a nice wrapper, then ShiningPanda is the right tool for the job. So if you do your build automatically, you of course also want to run your tests, right, because you want to make sure that your software actually works. If you use py.test, you're lucky, because you can just configure py.test to output files that Jenkins can read out of the box. Jenkins is Java software, so it has, of course, an XML interface, but with py.test it is really easy. I'm not sure about other Python test frameworks. We have collective.xmltestreport, which is the Plone wrapper around the Zope test runner. I won't bother you with that. And then you can present those nice statistics about your failing or passing tests. And the same is true for test coverage. So you can use the coverage package and the Jenkins Cobertura plugin to actually show that to your users. So you have a nice interface that you can also show to your project manager so he or she can track your performance and see if the build is broken. In order to make sure that your software is not only in a working state but also does what it is supposed to do, you usually need acceptance tests, right? And I'm a web developer, so what you usually do is write Selenium tests. We used that in the Plone community for a long time, but around five years ago we started with Robot Framework and that really gave us a boost when it came to acceptance testing. Robot Framework is a generic test framework with multiple plugins. One of those plugins is Selenium 2. So you can write tests in this nice BDD syntax, human readable, not only by programmers, and Robot Framework and Selenium will run those tests. And you have all the integration necessary in Jenkins as well.
So you have a Robot Framework plugin in Jenkins or a Selenium 2 plugin that shows you all the nice outputs of Robot Framework or Selenium. The cool thing about Robot Framework is that it gives you a full traceback. If your tests fail, it goes step by step through it and it takes an automatic screenshot of the last step where the test actually failed. And you have all that in a nice output that you can access and see what fails, right? And we are also using Sauce Labs, which is a software service that you can use to actually run your Robot Framework or Selenium tests on different browsers and platforms. They offer you all the versions that you could imagine, because you don't want to set up your own Windows machine. We tried it, don't do it. Those services are cheap, sorry for the advertising, or use any other service, but use a service, don't do that yourself. We tried it. Then one thing that is especially important for Python, because it's a dynamically typed language, is static code analysis. So you're able to track possible bugs early. I guess you're all familiar with the tools: pep8, pyflakes, pylint. We created a wrapper in the Plone community around those tools called plone.recipe.codeanalysis to have our best practices testable. You can use that without Plone, but only within buildout. And if you run those code analysis scripts, you can present the results within the Jenkins violations plugin, and it also gives you nice statistics about all your violations, not only for Python but also JSLint and all the modern stuff, CSSLint. It's all pluggable into the violations plugin. So you can really, really easily present all the information that you have to your developers or to your project managers or everybody involved. Then one of the things that is really important is notifications, because people need to be informed as quickly as possible about regressions. And there are many different ways you can do that in Jenkins. The best way, or the way that is most widely used, is via email, and there's an extended email plugin for Jenkins that allows you to define rules for which people you want to notify. So you can say, if the build breaks, then I want to notify this mailing list and that, and if the build is still failing, then I want to do this and that. So you can really define all the rules that you want. Usually if you have a larger organization, you want to hook Jenkins up with LDAP. It also comes with a plugin for GitHub, for instance, or Bitbucket. So you can use the authentication with that. That's really nice. That's the cool thing about Jenkins, that it has such a huge community that you have plugins for everything. And you also want to show the current status to your users. So you can use the Jenkins dashboard plugin to have a nice dashboard, or you can even build your custom frontends. It's all there. You just have to choose. So in the Plone community, we set up everything that I just presented to you, and we still ended up with this. So why is that? I mean, we put lots of effort with a lot of people into that and we built it all by the book, and the build was still broken. Why is that? There are two reasons, actually. One of the reasons is that for Plone it's hard to have this one-commit-one-build thing, because we have those 300 packages, and if you do a checkout, then it checks out those up to 300 packages, and you can't be sure that this all happens in a timeframe before somebody else comes along, right? So that's pretty specific, so I won't go into that detail, but that's a problem.
As soon as you have two people that could be responsible for something, they will point at each other and say it was the other one, right? That's always the case. And then the continuous integration and testing team needs to clean up and figure out what went wrong, and after that you can point at those persons and say, hey, it was you, but I had to clean up your stuff anyway. The second thing, which is not specific to Plone, is that people break the build and they just don't care. I mean, it's not because they're evil; sometimes you just want to do a quick fix or anything, or you do a commit and you think that can't possibly break anything, right? I just did that like two days ago, and it took a good friend of mine like two or three hours to fix my stuff, because it wasn't obvious, because the commit really looked perfect, and then he wrote in the GitHub commit message that he wants to kill me, and it was all my fault because I was tired and I just went to bed instead of waiting for Jenkins to pass. So it's not bad people, but sometimes those things happen, right? You break the build, maybe you don't check your emails or anything; our build still takes around 40 minutes, so people break the build. So how do you prevent that? As I said a couple of times before, continuous integration is a development practice. So what's maybe even more essential than good software that helps you with that is actually that you practice it, that you have agreement in the team. And I think we gained a lot of experience with that, because we have those 340 core developers; that picture is actually from our last year's conference in Brazil. We have over half a million lines of code. We have over 300 core packages, so we have quite a complex piece of software and a huge team of developers. It's not like a company where you can tell somebody to do things, right? So we need some agreement in the team on how to keep a green build. Fortunately, some smart people already thought about that and came up with a few continuous integration rules or best practices that allow you to keep a green build. The most important one is: do not check in on a broken build. The most important one is not "do not break the build", because that will not happen: people will break the build, and it's okay to break the build. It's just important that you don't check in on a broken build, because if the build is broken and somebody else comes in and checks in, then things get complex. You get more test failures and you can't figure out which commit was responsible, and then people will point at each other and say it was that guy, it wasn't me, right? And then things will become complex. So what you should do if you break the build: the team should stop, the entire team should stop and start fixing the build, because you have a real regression, right? Your software is not in a working state, and nobody can commit if they take this first rule seriously. So the team should stop and work on that. Sometimes that's not working. Then it's also fine to just revert your commit. Sometimes it's obvious what you can do to fix it and you can just fix it, but there should be a time frame, and you should fix the bug within that time frame, right? Because otherwise you will block the build. But if you stick with those rules, you can actually get a green build most of the time, not 100% of the time, because people will still break the build. That is what CI is for, right? Our tests take quite a long time to run.
If you run them all not in parallel like we do on the CI system, but sequentially, then it takes more than one and a half hours to run our tests. And you can't expect everybody to run all those tests, right? So people should use the CI system to break it, but not for long. So if we go with the continuous integration rules and have our setup, we have proof that our software is in a working state all the time. That is pretty cool for our developers, because if developers do a checkout, they know that the software works, right? Before that, they checked it out, wanted to fix something, and they had a broken build, so they had to fix something else first. That's frustrating. We could make faster releases, because our release manager did not have to ask the two or three hardcore developers to fix all those bugs for a day. He can just make releases, because our build is green, right? So you can deploy it any time. Just a few remarks about additional things that you could do. Scalability is important. You should definitely, if you have a larger project, consider using a server/node setup for Jenkins, which Jenkins allows you to do. Otherwise, if you have a lot of jobs running on your Jenkins machine, your UI will freeze because the server is busy. So run the jobs on the nodes. Use provisioning. There's nothing worse than a CI system that does not work reliably and behaves differently on the nodes. And you can use the Jenkins port allocator plugin to run things in parallel, because this is what you want to do. Then, if you have your CI system in place, the next step would be continuous delivery, not just continuous integration. With continuous integration, you automate your testing process and your integration process. With continuous deployment, you automate your deployment. The idea is that for deployment you just have to push a button, more or less, and automatically you will deploy. A lot of companies do that these days. Jenkins grew from a CI system to a system that can do CD as well. And we also started to work on that. We're using zest.releaser, for instance, to do Python egg releases. It's an awesome package. If you do egg releases by hand, stop and use zest.releaser. It's perfect. It's a really great piece of software. You can use devpi, for instance, to make egg or wheel releases to test your deployment. On the Jenkins side, there's a new plugin called the Jenkins Workflow plugin. It's a game changer in CI, in my opinion. It allows you to create really sophisticated workflows within Jenkins, to run certain steps in parallel or sequentially and notify people. It's incredibly flexible. I already played around with it and we definitely plan to move to it. So if you start with Jenkins, I would definitely check it out. It's really awesome. So to summarize, if you have a CI system and you integrate frequently, you have an automated build and test system for each integration and you report as soon as possible, you can get a green build most of the time, which gives you proof that you have software in a working state that you can deploy at any time. You can ship software faster and better. It's more fun for developers and less frustrating for them, because they don't run into failing tests. And Jenkins in the last four years has been great. You have plugins for everything. It's a great piece of software, even though it's written in Java. So yeah, use it.
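As a concrete reference for the reporting pieces mentioned earlier, py.test writing JUnit-style XML for Jenkins and coverage.py writing Cobertura-style XML, a minimal tox configuration might look like the sketch below. It is an illustration of the general approach, not Plone's actual setup.

```ini
# Illustrative tox.ini for a Jenkins job (not Plone's real configuration).
# py.test emits JUnit-style XML for the Jenkins test report;
# coverage.py emits Cobertura-style XML for the Cobertura plugin.
[tox]
envlist = py27

[testenv]
deps =
    pytest
    coverage
commands =
    coverage run -m pytest --junitxml=junit-{envname}.xml
    coverage xml -o coverage.xml
```

A Jenkins job would then run tox in a build step and point the JUnit and Cobertura publishers at those two XML files.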
If you want to know more about continuous integration, I highly recommend the book on the left side called Continuous Delivery by Jez Humble and David Farley. They came up with those continuous integration rules. There's another book called Continuous Integration from the same publisher. I would recommend buying this book because it has everything, and the continuous integration chapter in that book is really great. I bought both of the books; buy this one, you don't need the other one. And on the right side, this is a blog post, the URL is below, where I wrote about our CI setup with all the plugins that we used and all the approaches. So that's more in detail. If you have any questions, feel free to ask me on Twitter, on IRC, on my blog. The slides are there. And thank you. Thanks, Timo. We have time for two questions. If there's anyone who has a question for Timo, put your hand up. Thanks. Hi. First, I wanted to say that with nose tests, you can also output XML, which can be interpreted by Jenkins and displayed in the web UI. And my question is, what do you do with flaky tests? With flaky tests? Flaky tests? A test that sometimes fails. You can't prevent it. Yeah. So that's hard to do. What we usually try to do is to make them work reliably. And if they don't work reliably, we remove them. Because personally, I don't think it makes sense to have a test that fails randomly, because that doesn't give you any information. If a test fails randomly, it's of no use. Because if it fails, it gives you no information. If it passes, it gives you no information. So we try really hard to make them work reliably. This is especially important for Selenium tests, because the underlying technology is fragile. But you can make it work reliably. And Jenkins helps you a lot with that. Because if you run things in parallel, then you will see all kinds of effects that you don't see on your local machine. You have to make sure when you run Selenium tests that everything is there, because tests can run slow and fast. And it's not easy to do. But in my opinion, it's worth the effort to have reliable tests. My question was, could you quickly comment on how often developers step on each other's toes when you have so many repositories? Does it happen often? Do you regret having split them out instead of having them in one Git repository? Or do you use Git submodules? Could you please comment on these things? That's the big question that we always ask ourselves. I mean, we had a big monolithic software blob, and we split it into multiple packages, multiple repositories. And it's really great if you can, as a developer, pick things and improve certain packages without having to download everything. So that's a great thing, and we don't want to lose that. On the other hand, we see the amount of work that is necessary to release and keep track of all those multiple repositories. And we haven't really solved that problem where you have one commit and one build. We are close, but we don't have it. So it's a trade-off in the end. It's hard to say. I don't think that we will go back to the one-repository approach, but I can see the advantages that you would have. Yeah, that's possible. Yeah, that's possible. But then you still do a checkout, and then you can't be sure. That's basically the same. We are actually using mr.developer, which is a tool that checks out all the packages for you and makes sure that you have the right branches.
It's pretty sophisticated, pretty cool, but it's complex, and we tried to store known good sets of this. So for all our 300 packages, we stored the version numbers or the commit hashes and stuff like that, and we tried to make that reproducible, but it was just too complex. We failed at that. It just did not work. Great. Thank you very much, Timo. Great presentation. Yeah, great spot.
Timo Stollenwerk - The Butler and the Snake - Continuous Integration for Python Continuous Integration is a software development practice where members of a team integrate their work frequently, leading to multiple integrations per day. Each integration is verified by an automated process (including tests) to detect integration errors as quickly as possible. This talk will introduce the basic principles for building an effective Continuous Integration system for Python-based projects. It will present the lessons learned from building a Jenkins-based CI system for an Open Source project with a distributed team of more than 340 core developers that ranks among the top 2% of all open source projects worldwide (Plone).
10.5446/20212 (DOI)
Thanks and thanks for coming everybody. So the title of the talk is Mashing up py.test, coverage.py and ast.py to take TDD to a new level. Let me say just a quick thing about me. I've been a freelance programmer since the beginning of my career, like 20 years ago. A couple of times I took responsibility for the whole project and hired a few subcontractors to deliver it. On one project I chose Python for the delivery. It was in 2008 and I haven't had to touch anything else since then, fortunately. At the moment I have five Python subcontractors in one office in Bratislava, working for one client on a long-term project, and I'm looking for more clients. Okay, I have a little survey, so survey time. So how many of you have written at least one automated test? Cool. Who has a test suite that takes longer than two minutes? How about longer than ten minutes? Okay, so some hands. Okay, eight hours. Okay, two hands. Who is getting a broken build too frequently? That's cool. Who is using nose? How about py.test? More people. So these are three user interaction limits. 100 milliseconds is the limit for the user to feel the system is reacting immediately. One second is the limit for the user's flow to stay uninterrupted, even though they will notice the delay. And ten seconds is the limit for users until they think the system is broken and start doing something else. Computing has adapted; developers have done a good job of making it all quicker in recent years, so there are few activities in today's computing where you have to wait for more than ten seconds. But how about executing a test suite? It takes minutes or hours. It's 50 times slower than most other computer tasks. It brings intensive load on the computer, delays with other tasks, fan screaming. So what are the consequences? Users, and developers maybe more than anybody else, hate waiting, and executing tests interferes with the workflow so much, especially when under pressure or distraction. Some developer doesn't run the tests or doesn't notice negative results and commits a failing build, which makes the lives of other developers more difficult and sometimes starts a downward spiral. A broken test suite means the error lifespan increases. The developers valuing, using and maintaining the test suite the most are punished the most and waste the most time. I actually think that test execution time is the single biggest flaw of the automated tests idea as a whole. But how about running just the affected tests? The majority of code changes are local, so it's a waste to run the whole test suite each time. There is of course a solution most of us have used: the developer thinks, I'm changing just this module, so let's just execute the related tests. However, it's quite cumbersome and unreliable. Good luck being correct in this hand-picking when the dependencies look like this. Also, one of the purposes of a test suite is to discover a failure which you didn't think you could cause with the change, influencing something you wouldn't think of. So let's explore the idea of affected and unaffected tests on a very simple project comprised of one Python file. Let me have your attention: look at this project and try to grasp it completely. For those who don't know, py.test discovers and runs any functions which are called test_something, so this constitutes a valid test suite including the code under test. And here we have a grid of dependencies between the tests and methods of this particular project.
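The slide itself is not in the transcript, but based on the description (two functions, three tests, and a six-cell dependency grid with four crosses), the one-file toy project presumably looks roughly like this; the exact names and assertions are inferred from the talk rather than copied from the slide.

```python
# Reconstruction of the toy project the speaker describes; treat names
# and assertions as an approximation of the actual slide.

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

# py.test collects and runs anything named test_*
def test_add():
    assert add(1, 2) == 3

def test_subtract():
    assert subtract(5, 2) == 3

def test_both():
    assert add(subtract(5, 2), 1) == 4
```

With two functions and three tests that gives the six grid positions mentioned below, of which four are crossed: test_add depends only on add, test_subtract only on subtract, and test_both on both.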
Whatever you do inside the subtract body, there is no chance you will influence test_add. Have a look at the source code again. You can hack anything inside subtract and test_add is not going to be influenced; it never calls subtract. For test_add to start calling subtract, one of the methods it's already calling would have to change, which would trigger execution and create an updated dependency matrix. So back to the dependency matrix. Out of six positions we have four crossed ones, so almost all of them. But on bigger projects the ratio is going to be much smaller. So there are a lot of methods that can be changed and only influence a small ratio of the tests. Maybe you're suspicious about how we could track this matrix in a dynamic language like Python. I remember feeling the same way when I heard about coverage reporting for the first time. I thought it would be fragile, slow and unreliable. But no, it's very good. It's a stable and widely used project. Creating the matrix on the slide is just a little addition to coverage.py itself. It has the same limits. Obviously at the moment it doesn't work across technology stacks; you cannot track execution of C++ code or JavaScript triggered from Python. It also doesn't track data file changes. If that's an input of your tests and it changes the execution path, you will get wrong results. But for the circumstances where it works, it works just fine. Now let me show you a tool which automatically executes only the affected tests on every file change. Three tests executed. I changed one method. Two corresponding tests executed. Another two. So if you want to be a little bit evil you can join two methods. It's a screencast so that it doesn't go wrong during the presentation. Look how everything went as expected. So the idea transferred into a tool is testmon, or pytest-testmon. Let's go briefly through the libraries it is based on, starting with coverage.py. It's a giant which allows all this after some initialization and the execution of a small code snippet. Coverage.py is mostly used as a command line reporting tool, but the features of coverage.py which testmon uses are almost documented and almost part of the API. That doesn't sound very good, but really there are just one or two undocumented attributes used. And also Ned Batchelder, the author of coverage.py: I reached out when we started with testmon, and he would like to know if there are any obstacles and probably also fix them if something causes problems. And also, Ned would like to add to coverage.py the functionality to track which tests or which methods are actually executing which lines. So it's exactly the information which we are tracking, and this way it could converge, or there could be a joint effort. So after executing each test with coverage, we get the filenames and lines of code triggered by the test. From there we need to get the methods which were executed, and ast from the standard library can do that. And this was my first contact with syntax trees, when I implemented testmon; from the name you can imagine what they are, abstract syntax trees. I would say they are not much more abstract than other things in programming, so for me the word abstract is actually quite distracting. Testmon only needs to parse the Python source code and understand it enough to know where the line boundaries of method bodies are.
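The snippet on the slide isn't reproduced in the transcript; a rough equivalent of the two building blocks just described, coverage.py recording which lines one test executes and ast mapping those lines to method bodies, could look like this (coverage 4.x-style API, function names are my own, and testmon's real implementation is more involved).

```python
# Sketch only: per-test line collection plus ast-based method boundaries.
import ast
import coverage


def lines_executed_by(test_callable, source_file):
    """Run one test under coverage and return the executed line numbers."""
    cov = coverage.Coverage()
    cov.start()
    try:
        test_callable()
    finally:
        cov.stop()
    # source_file should match the path coverage records (usually absolute).
    return set(cov.get_data().lines(source_file) or [])


def method_line_ranges(source_file):
    """Parse the file and return {function name: (first, last body line)}."""
    with open(source_file) as f:
        tree = ast.parse(f.read(), filename=source_file)
    ranges = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # First and last statement of the body; good enough for a sketch.
            ranges[node.name] = (node.body[0].lineno, node.body[-1].lineno)
    return ranges


def methods_used_by(test_callable, source_file):
    """Cross the two results: which methods did this test actually hit?"""
    executed = lines_executed_by(test_callable, source_file)
    return {
        name
        for name, (start, end) in method_line_ranges(source_file).items()
        if any(start <= line <= end for line in executed)
    }
```

Running this for every test yields the kind of test-to-method dependency matrix shown on the slides.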
So the ast library is a little overkill for this, but it's ready and it works in all versions of Python, also future ones, and I was glad to learn the basics. If you would like to learn a little bit more about ast, I recommend the two resources to study on the slide. Testmon is a py.test plugin so far; py.test seems to have the more active community recently. And also here I was surprised to see more hands raised when I asked which test runner you use, but I have the feeling there are a lot of projects using nose and not switching anytime soon. So I'm interested in also porting testmon to nose, and if anybody has experience in doing some advanced stuff in nose I would be glad to talk, and maybe we need help with that as well. One interesting aspect of the whole testmon project was that I thought this is a valuable tool which many people could use and it would save them a lot of time, so I asked for money on Indiegogo. And I didn't have any followers or a blog or any other open source tools I had developed before, so it was really difficult to get the word out, but still the amount got collected, so that was a nice and inspiring thing. And I would encourage you, if you have done something good for the community and have a blog or followers and would like to dedicate your time to some valuable tool, don't hesitate and use this route. I think there will be more of it in the future. So yeah, that would be nice. Okay, a little sneak peek into the future. I hate the way the test suite results are presented on the console. I hate scrolling on the console and wading through the stack traces, so I was thinking about a better way to present the errors, and I think the best way, or the best I can think of so far, would be to present the errors inside the text editor you use. So something similar to a linter, for example, and this is a screenshot from Atom. It has a very good and easy way to add a linter or a code checker, and in combination with the interactivity of testmon, where you can get the results really quickly, I think this is the way for the future, to make it work like a linter. Of course it's not specific to testmon, it would apply to any test runner, but I think the combination of the quick feedback and good presentation would be great. So this is the conclusion: testmon is awesome, use it please, give me feedback, share, tell your colleagues and so on. This is my contact again, and we have enough time for questions, so I hope there are many of them. Please raise your hand if you have questions and we'll get the microphone to you. Hello, is there any way to force some test to run all the time? For example, I have a test that tests a database trigger and I want it to run all the time. Can I do this? Not yet, not yet. It doesn't have many features, so let's see how it goes, and it's also kept minimal so that it's easy to study and to prove the concept really works, and then adding features is the next step. Go ahead. You showed testmon running as a process that monitored a bunch of files, but can it also produce a report that I can process on my own, or does it have an API that I can use to tell me how source code lines map to tests, so I can use that information myself? No, not yet. I guess that's the addition to coverage that I mentioned, because if you have some, I don't know what that would be, maybe can you give him the microphone back, what would be the use? Maybe we talked about it, but maybe you can tell us the use case.
The use case I have in mind was for my lightning talk yesterday, the mutation testing, right? One of the stumbling blocks for that is it takes a long time to run all the tests inside these loops, so if you can determine exactly the tests you want to run based on some git diff or something like that, then you can drastically reduce the run time for these kinds of things and make them practical. So that would be where something like this could come in really handy for that kind of work. That's why I was wondering, I'm just trying to envision how I could use this to make my stuff work better. Yeah, well, as I said there are not many features and there is no API, but it's a really, really small tool, so it would be easy to add any of those at this point. Yeah, hi. How do you test testmon? Good question. It has a test suite, but there is a problem in calling coverage.py recursively, right? So I can have a test suite, but I cannot use the plugin on itself, which is a little bit of a bummer. But yeah, there is also a solution for the parts which rely on coverage: the next feature to add to testmon would be to manually specify methods and tests, to manually specify the dependencies, so that even if the tracking is lost in the recursion, it would still work. Does it track changes that you make in py.test fixtures as well? So if you update a fixture, will it detect that as part of it? Well, if it's data file fixtures, if it's some JSON, like Django fixtures, right, then no; that was one of the things I mentioned, that it doesn't work across technology stacks and it doesn't work when you have data file inputs or inputs from external services. So ideally you would have a test suite which constructs everything in Python code, also the fixtures. Then it's Python code and its execution is tracked. Could you go back to the little example you gave with the subtraction and addition stuff? Yeah. So what you did not show in the little screencast was: what if the subtract method gained a call to add? How would it deal with that? So you basically add a new dependency to one function, right? How would it know that? Would it know that after running the test once, or when would it notice? If you add a call of the add method to the subtract method, right? The subtract method is called by two tests, right? So both of them would be executed. So it would be test_subtract and test_both. Yeah, but test_add wouldn't be, and there's now a new dependency in subtract. It wouldn't be, and it wouldn't gain the dependency either, right? So there's a new dependency because subtract now calls add and the... But test_add doesn't call subtract. Okay, got it. Okay, my fault. Thanks. Also that's a good remark. We've been using testmon on our own project and most of the bug reports have been like this, fortunately. So we found out that testmon is right, which is quite a good surprise, because, yeah, I was also afraid that it was going to be fragile and stuff. But the biggest problem actually is that tests are dependent, even if you don't know about it; there are some fixtures which don't get re-initialized or something like that. So then you get failures because testmon always runs some dynamic subset of your test suite, which doesn't happen in any other circumstances. So that's the biggest problem in adopting it. Okay, next question. If my test calls a helper function... Sorry?
If my test calls a helper function and the helper function changes, will it notice? Yeah. Because it's part of the tree? Yes. No, it's not part of the tree. It's part of the execution. Well, the syntax tree or the AST parsing is only used at the end of the whole analysis and it's not that important. The important thing here is that when you run the test and the test calls the helper function, it's in the list of executed lines and executed methods of that test. It's in the dependency matrix. It will appear as a dependency. So that probably answers Floris' question, which was about pytest fixtures, not about data files. About text files? Pytest fixtures. In other words, it was a question about staying in the Python world, not a question about JSON fixtures. Yeah, that answers the question. Pytest fixtures are created by a method, right? So it's going to get tracked. How does it restart the test runner? Does it restart the entire process? Sorry, could you talk more slowly? Because testmon is implemented as a pytest plugin, right? So how does it restart the test run when it tries to run new tests? How does it affect session scoped fixtures? Do you restart the entire process? Or exactly how does that work? Well, I'm not sure about the session scoped fixtures. It really depends on the changes. Are you asking about the runtime itself? About the fixture setup and teardown, basically. So after it's run a couple of tests, will it tear down all the fixtures and restart them again for the next test? Or does it? I'm not sure. This might be, well, not tested yet. I'm not sure how it would behave. So the session fixture is always run, if I understand correctly, just once at the beginning of the test execution, right? And if you change anything in there, that's probably not tracked, because the specific test doesn't execute any of the session fixture, right? So that wouldn't get called, wouldn't get registered as a dependency. Do I understand correctly that if a function does something nasty, for example, changes some global value and another function depends on it, on this global value? Because we kind of can break the function which we haven't edited and the tests will not run for the other function, for the old one. This is quite difficult to explain, but if that evil function changes some global value, it will appear in the dependency matrix. Whether it's called by a test or not, for every single test you can say that. So for example, if my function depends on the state, function A depends on the state, and then I have function B that starts to modify this state and, at runtime, makes my original function break. How? Well, let's see. Either this is a test dependency, right? Because the evil function gets called in test one and then test two relies on that global value. So that's what I called a test dependency and talked about. That would be a problem. But if the tests are independent and test two doesn't rely on that value and resets it again or doesn't use the value, it doesn't matter. It gets registered, or it works as it should. Okay. That seems to be it. Thanks. Okay.
Tibor Arpas - Mashing up py.test, coverage.py and ast.py to take TDD to a new level Users, and developers especially, hate waiting. Computing has adapted and we almost never wait for the computer for more than 10 seconds. One big exception is running a test suite, which takes MINUTES on many projects. That is incredibly distracting and frustrating, and drags the whole concept of automated tests down. I present a technique and a tool (a py.test plugin called "testmon") which automatically selects only tests affected by recent changes. Does it sound too good to be true? Python developers rightfully have a suspicious attitude towards any tool which tries to be too clever about their source code. Code completion and symbol searching don't need to be 100% reliable, but messing with the test suite execution? I show that we can cut test suite execution time significantly but maintain its reliability.
10.5446/20205 (DOI)
Yeah, well, thank you. Thanks for showing up. I wouldn't have thought that so many people are interested in logging. That's good. So, the agenda is pretty simple. First of all, try to explain why logging might be useful, because my impression at least is that it's one of the most, let's say, underused modules. Then, how do we make it work? So I'm just showing around a little bit of the source code and a little bit of our structure. And then some optional content, in case I run over, including the all-time favorite: is logging slow? So, this used to be a Python notebook, and for scaling issues I just made some screenshots and turned it into a PDF, but you can get all the code on GitHub. And, well, let's start with the ugly part. If you don't use the logging module and start with the most basic way to get out your message, you most likely use print. You can use it for multiple things. So, looking at that, we have normal information, so this first one, debugging. You want to know what our program is doing, what values are in there at the moment. We might run into a situation where something goes wrong, and we also want to report on that. We also do this at different levels, so we have this little division function that really does only a division. Then we call it from another function that just, well, iterates through a complete list of tasks. And now we have four calls that we might want to see at one point in time. Normally, if you were to write that, you would probably start: okay, that's interesting, I'll add a print statement, print it, later on remove the statement. If this program runs into a problem during a long run, you may add some functionality to write into a file. This is all good and well. It has some limitations. So it looks like this. You get all the information out. But the only thing that you actually get when you look at it is all the things that you wrote yourself. That is, all the text that is in the print statements. And it's all handled the same. You can't really differentiate between the debug level, the error level, etc. So print has some limitations. We have to select ourselves what we want to log, how important it is, how we want to handle it. We have to write all the information we want to add to a message. It might be timing information, it might be information about function parameters, whatever you can think of. You have to do it yourself, which also means that you have a good chance of ending up with messages which are slightly different. Now, that is not a problem if you read them, but it starts to be a problem if you want to use them in a parser, for example. We were once in a situation where we actually had to parse logging output, not from Python but from Java, with different and slightly strange logging configurations, and that became a huge pain to parse because the messages were all slightly different. And if you do it by hand, if you use different formats for dates, it will be worse. Finally, we only have limited control over where our message ends up. So you can print to a file, you can print to the console, you can even write your own functionality to write to both. But once you start doing this, you basically implement your own little logging module, and chances are that the logging that you actually need is already in the standard library. So what's different with logging? We have more structure. Structure, in this case, is pretty good because it not only helps you to parse the thing, it also helps you recognize what's in your messages.
So you get a nice state stamp, so all the same size, get your error information, you get the name of the logger, it's all pretty nice. And the logging module provides us with all the different infrastructure that we need to set this up. So logging is good. To notice a slightly more theoretical band, if you have a message that you want to output from your program, you might ask yourself, how important is this? Do I need this every time I run this program or do I really care about if something is going wrong or if I'm interested into debugging? Where does it come from? As your program goes, you'll have multiple modules, multiple libraries in there that may or may not produce interesting messages. And you want to control over that. You also want some information about context. When did this happen? You may even, if you do logging for the web and add session information and things like that, so that in your log, you can really follow a user along. What happened? That's the thing that you normally think first about when you write a logging message, that's what you're writing. I'm doing this or this function failed. And finally, how does it all look like? So normally, if you think about logging, you'll think about text string, but it's completely reasonable with just a few changes to maybe write it from JSON, which again makes parsing much simpler and moves you slowly into the direction of structured logging, where you can send your log messages to a database and do some pretty advanced querying, which will be extremely hard if you did it all by hand in text in the beginning. Finally, if we have our message, we want also to control where this message ends up. So do we send it to a file? If we send it to a file, do we want a rotating handler? Do we want in a multiprocess environment, maybe send it to a socket or database? Do we want to aggregate it? And this is all things that you would have to implement if you just go the print way, and which are logging module implements. The challenge there is it implements all of it, not always in the, let's say, most transparent way. So there are some, well, let's say, things that you should know about when you start with logging. But to keep things easy, this is just the same program we had previously, and now just with logging. Now, what you see there is that it does not really add much code to your program is even a pretty bad example, because what we have there is, well, no real business logic or programming at all. It's just a division. But even there, each log message is just like a statement, and you have some extremely easy configuration up there. So this logging basic config basically just tells the module that it should include debug messages in the output, and then we get a logger. It's two lines extra and four lines that are different, so no worries. And that links us with this kind of output. But you can say that even with this little changes, we get some bonus features. So we have a more standardized format. We have the log level in for debug, always at the beginning. We have the name of the logger. We crested one logger, which is why it's always root. And then we have our logger message. Also, we have the stack trace for the exception log. Going back. When we did this to sprintf, we just said that we log exception. We had a problem there. Which log exception, you not only generate an error message, so a log level error, but you also send out the stack trace. Makes parsing again somewhat harder if you do it in a file. You can change it as well. 
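As a rough illustration of the program being described, a minimal sketch along the same lines might look like this (the function names and example values are assumptions, not the speaker's slides):

import logging

# One-call setup: emit DEBUG and above to the console.
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()  # the root logger, hence "root" in the output


def divide(numerator, denominator):
    logger.debug('dividing %s by %s', numerator, denominator)
    try:
        return numerator / denominator
    except ZeroDivisionError:
        # logger.exception() logs at ERROR level and appends the stack trace.
        logger.exception('division failed')
        return None


def process_all(pairs):
    logger.info('processing %d pairs', len(pairs))
    return [divide(a, b) for a, b in pairs]


process_all([(1, 2), (5, 0)])

The second call divides by zero, so logger.exception() records it at ERROR level together with the traceback.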
But you get this information essentially for free. And well, second code is just the second iteration. We got results. For this simple change, we got a lot of extra features. Making the things that we want out of the logging module to the actual infrastructure of the log module. So what we actually used for our logging messages, the log point, debug, log file info, is a logger. You request a logger from the logger module. Again, you have different functions that serve to indicate whether it's an interesting message, debug, or something that will bring your system to its knees critical. You can have different logger at different places in your program. You can request different ones. The loggers themselves have some configurations. They know to which handler they'll talk. They know their debug level. We go into that later. They send out their messages, log records, via a filter, a filter's optional, tune handler. Handler could be steam handler, writing to console, file handler, rotating file handler, sockets, queue for multiprocessing. I think there are 14 and counting in the module. It's quite likely that you find something that is useful to you. Maybe not perfect, but useful. Also we say handler is a formator. The formator turns the log message, sync addict, into the actual text that you found out. These are a couple of different objects. They are arranged in a tree, as we see later, but it's not very complicated to get them. If you call base config, what happens, let's say, under the hood, is that it just setups a very basic logging infrastructure for you. It gives you a logger, it gives you a stream handler, it gives you a standard formator, and then it puts out this information that we saw previously. Now, the most interesting thing is usually what log messages should we use. Debug, like the name suggests, is mostly for debugging. If you need extra information because you're looking into some problem, it's not something you would have enabled as you run your program. When I end up in your log files, usually space shouldn't be a problem, but it's maybe a little too chatted. Info is information like, I'm starting my program, I'm doing this, I'm not calling this function. Might be interesting, let's you know where you are in your program, but again, not as important. Warning is just that, something that at one point in time you should probably look into. When you have error, something went wrong, and finally critical, your program is about crash or needs to be terminated. So, this also really helps you to just structure what you're writing, to think about what information do I want to communicate here. And as you put these out, you see that you get the logging level plus the information, but logging exception is just a special case of error, where also the stack trace is added. And if you only keep, let's say, once you're out of this whole talk, but most likely business, of course, that helps you already to build your Stratelet log, and you can build it from there. So, the most basic thing is just get the log messages into the program, set up, or fine tuning can happen afterwards. Now we go slightly more complex. This is basic config. Basic config is just the easy way to configure the logging module, mostly for SQUIP. So if you want to do just one call to get your logging, it's basic config. What I do there in addition to the slide before, that first of all, I add some format specification. There are two of those, one is the date format. 
Basically the standard format is also used in the daytime module for parsing and printing to say what information you want to get out there. One thing to keep in mind is that milliseconds are not included. For some reason or other one is the format string. And the format string tells us on what information we want the logging module to include. The interesting thing is that the only part of this message that we supply is the last part message. Everything else is provided by the logger. So the logger will tell you what the time is, as specified by date format, plus a millisecond, the name on the level of the log. It will give you the name of the logger message. And as you see later, you'll get much more information if you want it, including the line in your source code, including the thread or process ID. So you can log almost everything about your program in there, and in a way that you'll find the course of the log entry later on. You will see that in addition to the log message, a pure text string, you add some information. The one-swee, one-pot-three there. This uses the old style string formatting. I think from sweet point to onwards, you can also use the new style string format. And that would be just a configuration option in the formator. For this talk, I just used a simple word. So slightly more complex, basic config now is a format string with a date string and lock level debug. If you ever start logging with debug and info and your messages don't show up, that's what you forgot. The default configuration has the default logger set to warning only, so debug and info messages get dropped. Now I don't particularly like basic config, because at the end you'll have to learn two different modes of set-up the log. It has this interesting feature that it is only called once. So once it is called, it's set up the logging system and if you call it again, it will not necessarily change it, which can be confusing. So my suggestion would be to go with this directly. It's slightly longer, but at least in my view, it's easier to understand. So you request a logger, then you set a level for the logger, then you get your handler, assign the logger to the handler, together with the formator, and you have your logging system set-up. It's essentially the same thing we did before. You can use this like I did it here, so just in your Python code, maybe just write a little module logging set-up that you import will usually work. If you go for slightly more complex situations, you will do something different. But just going back what we did here, so we got logger, we assigned a stream handler to the logger, and we assigned a default formator or formator to the stream handler. And the logger always has this log level on which it's enabled or not. In more detail, so the formator has its two format strings, one for the time, one for the message. The log info message has the, let's say, textual part, whatever you want to say in prose, and some parameters that get added to the string. Following from this configuration, you get your actual log message. These are all the things that would be available in the log record to add to your message. I've actually taken out some, because there are quite a lot of them. Some of them are quite surprising, like the function name and the line number. By default, they are in there. You can disable them for performance reason if you want to. It doesn't make much of a difference. 
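A hedged sketch of that kind of setup, with a format string that pulls in some of the record attributes just mentioned (the logger name, the exact format, and the date format are assumptions):

import logging

LOG_FORMAT = ('%(asctime)s.%(msecs)03d %(levelname)-8s %(name)s '
              '%(funcName)s:%(lineno)d %(message)s')
DATE_FORMAT = '%Y-%m-%d %H:%M:%S'   # datetime-style format; milliseconds are added via %(msecs)03d

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)           # the logger's own threshold

handler = logging.StreamHandler()        # write to the console
handler.setFormatter(logging.Formatter(LOG_FORMAT, datefmt=DATE_FORMAT))
logger.addHandler(handler)

# Old-style %-formatting: the arguments are only merged into the string
# when the record is actually emitted.
logger.info('processed %d of %d items', 1, 3)

Swapping the StreamHandler for a FileHandler, or attaching both, follows exactly the same pattern.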
So just take away, in your logging, you can really point to the Python code where the log message originated from, which may not be useful on the same days that you write but may be useful when you see a log message, say, two weeks, two months, two years after you wrote the code. This is the final way to set it up, DIC config. There's also a configuration file config, but I like this more. It is not actually easy to read, but it is quite powerful. So what you should take away, the yellow parts are just the different objects that we've seen previously. So we have a logger down there, the handler and the formator, all of which of them reference each other. So the logger knows that it has one handler named console, and the handler knows that it's one formator named long format. All the other parameters are sent in there. Nice thing about this is that it's quite easy to add information. If you have your login configuration in a file outside your program code, it's quite easy to reload the whole system. You can change your login configuration from outside. You can add, and there's where things get slightly more interesting, different handlers to your loggers. So what you do here is just look at the same config, then add a second handler and then lock one message in the end, and as you can see, there's this cat log file. Now my message ends up in my console and also in my file handler. And, well, if you want to, you can add as many handlers as you want to. So this is all I'm showing you now. So we have different file handlers and a stream handler. The thing to keep in mind on the log level is that you can have different handlers with different levels. So if you say, I only want errors in my files, but everything in my console, you can do that. Like I said, so if we just set the file handler to only print the warnings and ignore the debugger info, it will not show up in your file. This is, I do have it quite easy. So you load your basic config, modify it a little bit and get more information. You can also, this is where things get interesting in terms of structure at ChiveLoggers. So we just request other loggers, normally with, well, some name or name.dot, which maps quite well to the name of modules. And what you create is some logging tree. So you have a tree of logging objects and you would normally configure this in a way that you attach the file handlers and everything else to the root logger. Then add the ChiveLoggers below that. Then you have some switches to which to configure where log messages are. So normally, start the root logger, configure it and then add one extra logger per module. And best practice is to add name for this. So go a little bit faster because there's one thing that I'd like to show you. You cannot filter. Filters are a little bit of dark magic. So it's just a function that you call with your log message, which then decides whether it gets passed on or not. You can also modify the log record. So if you want to add extra information, you can do that. And for that, I just refer you to the iPad notebook. The interesting thing and the most likely reason is things are fail as this workflow. So once you have your logging tree built, the, let's say, way that a logging message is passed up the tree is not really intuitive. So you have a logging call. If it is not enabled at the logger of origin, it gets rejected. If not, a record is created and the local filters are applied. Now, if there is a handler at the current level, this handler will be called. 
If not, it will go to the parent. At the parent, it will not work with the filters and it will not work with the level. So at this moment, the only way to get the message out is the handler, which is also quite nice because you know that the handlers are responsible for filtering the methods. And then it gets emitted. It's actually standard, the standard documentation from the module. I just made a little bit more colorful. But I think if you ever run into any problem with logging, this is the most likely cause because there are quite a lot of things that you can tweak or not tweak. Okay. So that's just some basic code for filtering. As you can see from sweet point, something you can add callables. Note that it was just an object that had a filter method. So for the 2.7 users, it's slightly inconvenient, but not much more complicated. You can do a lot of extra things. You can get the dictionary of the log record and add information, which is actually something that happens with the logging modules. It's not as ugly as it seems. The log record that's created there looks about like this. So you can see that I Python string there. So it's created out from the I Python logging hierarchy. So tons of information that you might find useful or not depends. But just, yeah, behind the lines. So if you run into problems to do that, so see what my logging tree like, I want to recommend just one module logging tree, it's somewhere on GitHub just group it is also on pet. It prints out the whole logging tree. This is about page one of let's guess five that gets created when you call the or visualize a logging tree from an I Python notebook because I price itself as quite a lot of logging modules as a little exercise for yourself. I just recommend open a Python console, import requests and print a logging tree. So we question all other modules will also add their own logs, which are by default not enabled. So they have a not set handler and logging never not set, but they're there and you can reconfigure them and use them if you want to. Okay. Thank you, seven. We are unfortunately out of time for questions, but I'm sure you'll be answering them on the whole ways. Sure. So mine was actually just a quick video from the about looking
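To tie together the dictConfig, per-handler levels, per-module loggers, and filter ideas from this part of the talk, here is a hedged sketch; the names, the log file, and the filtering rule are invented for illustration:

import logging
import logging.config


def drop_noisy(record):
    """Filter callable (Python 3.2+): return False to drop a record."""
    return 'heartbeat' not in record.getMessage()


LOGGING = {
    'version': 1,
    'formatters': {
        'long_format': {
            'format': '%(asctime)s %(levelname)-8s %(name)s %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'long_format',
            'level': 'DEBUG',
        },
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'app.log',
            'formatter': 'long_format',
            'level': 'WARNING',      # only warnings and above reach the file
        },
    },
    'root': {
        'handlers': ['console', 'file'],
        'level': 'DEBUG',
    },
}

logging.config.dictConfig(LOGGING)

logger = logging.getLogger(__name__)      # one child logger per module
logger.addFilter(drop_noisy)              # attach the filter callable
logger.debug('console only')
logger.warning('console and file')
logger.info('heartbeat - dropped by the filter')

The logging_tree package mentioned in the talk can then print the resulting tree of loggers, handlers and filters, which is handy when messages don't show up where you expect them.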
Stefan Baerisch - A Deep Look at Logging Do you know what your application did last night? Python logging can help you. This talk will show you how to implement a systematic logging approach without boilerplate code and how to set up the Python logging module for different needs in production systems. We will see how to work with log files and other logging endpoints. We will address the data protection concerns that come up when logging from applications with personal information. We will also look at the performance implications of logging. We will then cover best practices - how to structure logging, what to include in a log message, and how to configure logging for different use cases. We will use the Python standard logging module to implement logging. This talk is useful to beginners with some experience. An understanding of decorators is useful, but not required. Some experience in web programming is a plus.
10.5446/20204 (DOI)
Okay, hi everyone. Thanks for coming. I'm very sorry about the technical difficulties. We clearly should have had a bit more time to set up and prepare. And I really don't please try nothing to look ahead too far in the slides. I know it's going to be difficult. But there you go. Okay. So I'm going to talk about web scraping best practices. I originally called this advanced web scraping because we're going to touch on a lot of advanced topics. But it's not advanced in the sense that you need to be past the beginner level or anything to understand it. So I changed it to best practices. I hope that everybody can follow this talk and understand what's going on. If you can't, please just shout or let me know. So a bit about me. Let's see. Eight years ago about that I started scraping kind of an anger. And that was around the time when we did the scraping web or web scraping framework. And since that time we've been involved in a couple of other projects including Portia and Frontera. If you don't know what they are, don't worry. I'll get to them later. So why would you want to scrape? Well, lots of good sources of data on the internet. And actually we come across a lot of companies and universities and research labs of all different sizes who are using web scraping. But you know, getting data from the web is difficult. You can't rely on APIs. You can't rely on semantic markup. So that's where web scraping can come in. These are some stats. You probably can't read them very well because it's small. But basically web scraping has been on the increase recently. We've seen that ourselves but this has been also something we've seen from other companies reporting. These stats are from a company called Incapsula that provide anti-bot scraping technology. And it's a sample of their customers. So it's probably not completely representative of the internet as a whole. But still it's very interesting to see. And another thing that I can see from this as well is that smaller websites have a larger percentage of bot traffic. Probably because they have less users but it's something to keep in mind. Especially if you write bad bots. They cause more trouble for smaller websites. Smaller websites might have bandwidth limits, for example. And many HTTP libraries, they don't compress content. So you easily go over and they're bandwidth limits. Also, of course, doing a bad job means your web scraper is very hard to maintain. This is a notorious problem, of course, because websites change. So when I think about web scraping, I like to think of it as in two parts. The first is actually getting the content. So it's finding good sources of content and downloading it. And then the second is the extraction. Actually extracting structured data from that downloaded content. And I've kind of structured this talk in two parts as well that follows this. So, and as an example of web scraping, I just said that scraping help gets scraped all the time. And it's not just people testing out Scrapy or something like that or our tools. But actually, a couple of weeks ago, we posted a job on our website. And the next day, it was up on a job listing web support. And none of us posted it there. So we thought, well, how did that happen? And I think we were probably scraped. So a question for the audience would be to think about how would you write that scraper? I would break it down into, okay, how do I find good sources of content? And how do I extract that data? It turns out that we tweeted about the job. So hashtag remote working. 
So maybe somebody picked it up from Twitter, got retweeted. That would be an easy source of content. And we did use semantic markup. So perhaps they extracted it from that. And that's relatively, to write such a scraper that could do this is a relatively easy task. You could do it in a day maybe. But then if you wanted to do, say, to handle cases where people didn't use semantic markup, or you wanted to find people who didn't post to tweet about it or post it to some other website, then it becomes a much bigger and much more complex task. And I think that kind of highlights the scope of web scraping from the kind of very easy pool of fun hacks that don't take very long to the very ambitious and very difficult projects that happen. So getting on, moving on to downloading. Yeah, I'm going to mention the Python requests library. Probably many people know it. It's a great library for HTTP. And doing simple things is simple, as it should be. But when you start scraping at a little bit more scale, you really want to worry a bit more about a few other things. Like, for example, retrying requests that fail. Certainly when we started out, you know, you'd run a web scrape and it might take days to finish. And then about three quarters of the way through, you get a network error, or you get, you know, the website itself that you're scraping, but suddenly return 500 internal server error for 10 minutes. So if you don't have some policy to handle this, it's a huge pain in the ass. So, yeah, you want to think about that. I also, in this example, you can see I'm using a session. Well, I don't know if you can see it or not, because it's small. But consider using sessions with Python requests. Use handle cookies. They also use connection keep alive. So you don't end up repeatedly opening and closing connections to the sites you scrape. But I would say as soon as you start crawling, you really want to think about using Scrappy right away. This little example here is not much code. It uses Scrappy's crawl spider, which is a common pattern for scraping, for crawling. You know, just defining one rule, a start URL, and that's enough to go from the RUR Python website for this conference to actually follow all the links to speakers, and you just need to fill in some coded parts of the speaker details. So it's really not much code. And it solves all the problems, like highlighting, or solves all the problems, like retrying, et cetera. You can cache the data locally, which is good if you're going to live demo stuff. Yeah. So a single crawl like that often turns into crawling multiple websites. At PyCon US in 2014, we did a demo, and it's up on Scraping Hub's GitHub account. It's called PyCon Speakers, where we actually scraped data from a whole lot of tech conferences. This is a really good example to look at, because it shows you can, it shows a way to manage and how Scrappy Project looks when you've got a lot of spiders. And Scrappy provides a lot of facilities for managing that, like you can list all the spiders that are there. A spider is a bit of logic that we write for a given website. And it also shows best practices in terms of, you know, it's easy with Scrappy to put common logic in common places and share it across multiple websites. When they're crawling the same type of thing, there's a lot of scope for code reuse. So definitely for Scraping multiple websites. Yeah, Scrappy's no-brainer. So some tips for crawling. Find good sources of things. Some people maybe might not think about using sitemaps. 
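One possible shape for the "requests with sessions and retries" point above — a sketch, not the speaker's code; the URL, retry counts, and status codes are assumptions:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# A session keeps cookies and reuses connections (keep-alive);
# the Retry policy re-attempts transient failures such as 500s.
retry = Retry(total=5, backoff_factor=0.5,
              status_forcelist=[500, 502, 503, 504])
session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=retry))
session.mount('https://', HTTPAdapter(max_retries=retry))

response = session.get('https://example.com/page.html', timeout=30)
response.raise_for_status()
content = response.text

Scrapy gives you retries, connection reuse, caching and politeness settings out of the box, which is part of why the talk suggests moving to it early; sitemaps, picked up next, are another source it supports directly.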
Scrappy actually has a sitemap spider that makes this very easy and transparent. But often, you know, it can be a much more efficient way to get to content. And that also means, of course, don't follow unnecessary links. Yeah, this is, you can waste an awful lot of resources for everybody following stuff that doesn't need to be followed. Consider crawl order. Yeah, so if you're discovering links on a website, it might make sense to crawl, to crawl breadth first and limit the depth you go to. This can help you avoid crawler traps where maybe you're, I don't know, repeatedly scraping a calendar, for example, and just going through the dates is a common example. I used to work in a company before that had a search engine. And crawlers every now and then would enter, follow some link into it and would follow all the search facets and turn every permutation and combination of search. And this generated huge load, of course. So as you decide to scale up, so I was talking here about maybe single-website scrapes, which is, I guess, the most common use case, at least, especially for scraping. And single-website scrapes can be big, right? I mean, we frequently do maybe hundreds of millions of pages. But at scale, say, for example, you're writing a vertical search or a focused crawler, then we're talking maybe tens of billions or even hundreds of billions of discovered URLs. So you might crawl a certain set of pages, but the amount of URLs you discover on those pages, so your entire state that you need to keep in your URL frontier is what can be much, much larger. So maintaining all of that is a bit of a headache. It's a lot of data. And one common way to do it is people just write all that data somewhere and then perform the big batch computation to maybe figure out the next set of unique URLs to crawl, typically using hadoob or mapperjoo. It's a very common thing. Maybe not just a good example of that. And then incremental crawling would be where you are, continuous crawling, actually, would be where you're continuously feeding URLs to your crawlers. This has the advantage that you can respond much more quickly to changes. You don't need to stop the crawl and resume it. But also, nowadays, maybe you want to repeatedly hit some websites. Maybe you're following social media or something like that or good sources of links. So it's much more useful, but it's much more complex at the same time, and it's a harder problem to solve. Maintain politeness is the little point on the bottom, but it's something really you want to consider when you're doing it on any scale. I think almost anybody can fire up a lot of instances nowadays on EC2 or your favorite cloud platform. And just download loads of links, download loads of pages really quickly without putting much thought into what those pages are. Particularly, the impact it's going to have on the websites you're crawling. In a larger crawl where you're crawling from multiple servers, you would typically only crawl a single website from a single server, and that server could then maintain politeness. So you can ensure whatever your crawling policies are, you don't break it. So, Frontera, I thought I'd briefly mention it. Alexander Sibirikov gave a talk on it yesterday. This is a Python project that we worked on, or we're working on, that implements this crawl frontier. So it maintains all this data about visiting URLs and tells you what, you should crawl next. There's a few different configurable backends to it. 
So you can use it embedded in your Scrapy crawl, or you can just use it via an API with your own thing. And it implements some more sophisticated revisit policies. So if you, say, want to go back to some pages more often than others, and maybe keep content fresh, it can do that. And I think Alexander particularly talked about doing it at scale. So here's a crawl of the Spanish Internet. And he's going to be talking about that in the poster session as well. So please come visit. So just to summarize quickly what we talked about downloading. Request is an awesome library for simple cases. But once you start crawling, it's better to move to Scrapy quickly. Maybe you wouldn't even want to start there. And if you need to do anything really complicated or sophisticated or at scale, consider Frontera. So moving on to extraction. Extraction is the second part that I wanted to talk about. Of course, Python is a great language for extracting content or for messing with strings or messing with data. There's probably a lot of talks at this conference about managing data with Python. But even just the simple, you know, built-in features to the language and the standard library make it very easy to play with text content. Regular expressions, of course, is one thing that's built into the library and probably, yeah, I should mention something about it. Regular expressions are brilliant for textual content. Yeah, it works great with things like telephone numbers or post codes. But if you find yourself ever matching against HTML tags or HTML content, you've probably made a mistake and there's probably going to be a better way to do it. I see this code all the time of regular expressions and, yeah, it works fine, but it's hard to understand and modify. And often it actually doesn't work fine. So other techniques, well, use HTML parsers. So we have some great options. Yeah, so if you want, this is when you want to extract based on the content, based on the structure of HTML pages. So often you will say, okay, this area here surrounded by this, underneath that table is HTML parsers, absolutely the way to go. Yeah, so just a brief example. Oh, yeah, on the right-hand side, I just had some examples of HTML parsers. LXML, HTML 5.0, beautiful soup, gumbo, and of course, Python has its own built-in HTML parser. I'll talk about them a bit more in a minute, so don't worry if you can't see that. So just as a brief example of what they do is take some raw HTML here that looks like text and create a parse tree. My favorite way of dealing, and then use some technique. Usually these parsers provide some method to navigate this parse tree and extract the bits you're interested in. I don't know if you can see that, so I'll skip this quickly, but I quite like XPath as a way to do this. It's very powerful. You can, in this case, just select all bold tags or bold tag under div, or the text from the second div tag. It lets you specify rules. It's really worth learning if you're going to be doing a lot of this. Yeah, here's an example from Scrapy. You don't really need to read that, but basically it just lets you, Scrapy provides a nice way for you to call XPath, RCSS selectors, on responses. Yeah, so this is probably definitely the most common way to scrape content from a small set of known websites. I definitely want to mention beautiful soup as well. This is a very popular Python library. Maybe in the early days it was a bit slow, but the more recent version you can use different parser backends. 
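As a concrete, hedged version of the XPath examples being described, here is a sketch using the parsel selector library that underlies Scrapy's response.xpath/response.css (the HTML snippet is invented):

from parsel import Selector

html = """
<html><body>
  <div><b>first</b></div>
  <div>second <b>bold</b> text</div>
</body></html>
"""
sel = Selector(text=html)

all_bold = sel.xpath('//b/text()').getall()            # text of every <b> element
bold_under_div = sel.xpath('//div/b/text()').getall()  # <b> directly under a <div>
second_div = sel.xpath('//div[2]//text()').getall()    # all text inside the second <div>
same_via_css = sel.css('div b::text').getall()         # the CSS-selector equivalent

print(all_bold, second_div)

XPath is the more expressive of the two (axes, text functions, positional predicates), while CSS selectors are often shorter for simple structural matches; Scrapy exposes both on every response.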
So you can even use beautiful soup on top of that XML. The main difference with the example I showed previously is that beautiful soup is a pure Python API, so you can navigate content using Python constructs and Python objects versus XPath expressions. The other thing is, of course, you might not need to do this at all. Maybe somebody has already written something to extract what you're looking for. So, yeah, definitely, maybe there's stuff you wouldn't even think of. Some examples of things that we've done is we wrote a log-in form module for Scrapy that automatically fills in forms and logs into websites. We have a date parser module that takes textual strings and can build a data object from it. And webpagers, another project that we wrote which looks at an HTML page and will pull out links that perform pagination, which is often useful. I was going to live them with this, but I think we're probably short on time. Maybe it's not worth tempting face. We had enough technical problems already. But Portia is a visual way to build web scrapers. It's applicable in many of the cases where I had previously mentioned where we would use XPath or beautiful soup. But it's advantages. It's got a nice UI where you can visually say, oh, I want to select this element. This is the title. This is the image. This is the text. I was going to demo this about scraping the EuroPython website. Maybe if somebody wants to drop by our booth later, I can do it. I can show you. But it's really good. It can save you a lot of time. However, it's not as applicable. You know, if you really want to be, if you have some kind of complex rules, complex extraction logic, it might not always work with this. And of course, if you want to use any of the previously mentioned stuff like automatically extracting dates and things, they might not be built into Portia yet. So scaling up extraction, Portia is great. It's much quicker to write extracting extraction for websites. But at some point, it becomes pointless again. You might be scraping 20 websites. That's fine. 100 people have used it to scrape thousands. But what about tens of thousands or maybe even hundreds of thousands? At this point, you want to look for different techniques. There are some libraries that can extract articles from many pages. They're easy to use. I want to focus on, quickly, on a library called WebStruct that we worked on that helps with automatically extracting data from HTML pages. And the example I'm going to use is named entity recognition. So in this case, we want to find elements in the text and assign them into categories. So we start with annotating web pages. So of the type of stuff, the type of, we label web pages basically with what we want to extract as examples. We're going to use a tool called WebAnnotator. But there are others. Here's an example of labeling. In this case, we want to find organization names. So the OT cafe is an organization. And we would label it within a sentence, within a page. That format is not so useful for machine learning and for the kind of tools we want to use. So we would, of course, that text is split into tokens. Each token in this case is a word. And we label every single token in the whole page as being either outside of what we're looking for, as being, or as being at the beginning of an organization, or inside an organization. And given that encoding, then we can apply more standard machine learning algorithms. Yeah, in our case, we found conditional random fields as a good way to go about it. 
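To make that encoding concrete, a hypothetical sketch — the sentence, the labels, and the features are invented, not taken from the project:

# BIO encoding: every token is Outside an entity, at the Beginning of an
# organisation name, or Inside one.
tokens = ['Meet', 'us', 'at', 'the', 'OT', 'cafe', 'tomorrow']
labels = ['O',    'O',  'O',  'O',   'B-ORG', 'I-ORG', 'O']


def token_features(tokens, i):
    """Features for token i, including a little surrounding context."""
    word = tokens[i]
    features = {
        'word.lower': word.lower(),
        'word.istitle': word.istitle(),
        'word.isupper': word.isupper(),
    }
    if i > 0:                      # previous token as context
        features['prev.lower'] = tokens[i - 1].lower()
    if i < len(tokens) - 1:        # next token as context
        features['next.lower'] = tokens[i + 1].lower()
    return features


feature_sequence = [token_features(tokens, i) for i in range(len(tokens))]
# feature_sequence and labels are the kind of input a sequence model such as
# a CRF (e.g. via python-crfsuite) would be trained on.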
But an important point is that it needs to take into account the sequencing of tokens, or the sequencing of information. Some features, so we feed it basically not just the tokens itself, but actual features. And the features can be things like about the token itself. But they can also take into account the surrounding context. And this is a very important point. We can take into account the surrounding text or even HTML elements that it's embedded in. So it can be quite powerful. So one way to do it, and this is what we've been doing recently, is to use our WebStruct project. And this helps load the annotations that were done previously in WebAnnotator. Call back to your Python modules that you write yourself to do the feature extraction. And then it interfaces with something like Python CRF Suite to actually perform the extraction. So this is just briefly to summarize. We use slightly different technologies depending on the scale of extraction. HTML parsing and Porsche are very good for a single page or single website. Or for multiple websites if we don't have too many. The machine learning approaches are very good if we have a lot of data. We compromise a bit on maybe the accuracy. But that's the nature. I just wanted to briefly mention a sample of a project we've done recently. Actually, we're still working on it. You might know the Sachi Art Gallery. It's a gallery of contemporary art in London. We did a project with them to create content for their global gallery guide. Now, this is an ambitious project to showcase artworks and artists and exhibitions from around the world. So it's a fun project and it's nice to look at artworks all day. So of course we use scraping for the crawling. We deployed it to Scrapy Cloud, which is a scraping hub service for running Scrapy crawls. And we used WebPager, one of the tools I mentioned earlier, to actually paginate. So the crawl, we prioritized the links to follow. So using machine learning so we don't try and waste too many resources on each website with Scrapy. Once we hit the target web pages, we then use WebPager to paginate. So that's the crawling side. On the extraction side, we use WebStruct very much like I previously described. One interesting thing that came up, I thought, was that when we were extracting images for art for artists, we often got them wrong. And we had to use a classification to... We actually classified them based on the image content using face recognition to see which one were artists versus artworks. So it's working pretty well. This is in scraping 11,000 websites, hopefully, to continue and increase. So one important thing, of course, is to measure accuracy, to test everything, improve incrementally. And it's also good to not treat these things too much like a black box, try and understand what's going on, don't make random changes. It tends to not work so well. So briefly, we've covered downloading content, we've covered extracting. It seems like we have everything to go and scrape at large scale, but there's still plenty of problems. And I'm just going to touch on a few in the last five minutes. Of course, web pages have an irregular structure. This can break your crawl pretty badly. It happens all the time. From people using... Superficially, some websites look like they're structured, but it turns out somebody was using a template and a word processor or something, and there's just loads of variations that kill you. 
Other times, I don't know, maybe the developers have too much time in their hands and they write a million different kinds of templates. You can discover halfway through that the website's doing multivariate testing and it looks different the next time you run your crawl. I wish there was a silver bullet or some solution I could offer you for these, but there's not. Another problem that will come up is sites requiring JavaScript are browser rendering. We tend to have... We have a service called Splash, which is a scriptable browser that presents an HTTP API. So this is very useful to integrate with Scrapy and some other services. You can write your scrapers in Python and just have the browser... Have Splash take care of the browser rendering, and we can script extensions based in Lua. Selenium is another project. If you start thinking like, okay, follow this link, type this here, Selenium is a great way to go. Oh, yeah. Finally, of course, you can look at Web Inspector or something to see what's happening. This is maybe the most common thing for Scrapy programmers because it's quite efficient. You can just... Often there's an API behind the scenes that you can actually use instead. Proxy management is another thing that you might want to consider because some websites will give you different content depending on where you are. We crawled one website that actually did currency conversion, so I thought I was being very clever by selecting the currency at the start, but it turns out the website did a double conversion and some products were like a center too different, so didn't discover that one for ages. They ban hosting centers often where they've had one or two abusive bots. It could be somebody else. This is just part of the nature of scraping in the cloud. For reliability, sometimes for speed, you might want to consider proxies. Please don't use open proxies. They sometimes modify content. It's just not a good idea. Tor, I generally don't like it for large content scraping. It's not really what it's intended for, but we've done some things with maybe government agencies or security in the security area where we really don't want any blowback from the scraping, and it really needs to be anonymous. Otherwise, there are plenty of private providers, but very in quality. Finally, last slide is just briefly want to mention about ethics of web scraping. I think the most important question to ask yourself is what harm is your web scraping doing? Either on a technical side or with the content that you scrape? Are you misusing it? Are you hurting the sites you're getting it from? So on the technical side, crawl at a reasonable rate, and it's best practice to identify yourself with your user agent and to respect robots.txt, especially on broad crawls. That's when you visit lots of websites. That's it. We have some questions. Thank you. Thanks. Wonderful talk. One question. Imagine you have to login into some website, and if you use a tool that will generate some fake credentials and stuff, for example, you have a profile of a programmer or a farmer or a rock star and so on. Thanks. Okay. So about logging into websites, well, the tool I mentioned just finds the login box and lets you configure your user ID that you want to use. So it doesn't handle managing multiple contacts. I have seen people do that, but it's not something I've done myself. Yeah, so sorry. That's all I can say about it, really. Any other questions? Hi. First of all, thanks for the scrappy library. 
I mean, it's an awesome thing, and we are using it on a daily basis. That's great to hear. Actually, these guys, you should be thanking in the audience. Yeah, thanks guys. I may have gotten a ball rolling, but stand up guys. Stand up. But these are the contributors really here. I think there's more of them up there, but I don't know why they're being shy. Sorry, go ahead. I probably have a few questions, but I'll only ask a couple, I guess. First, I'd like to mention PyQuery. That was an awesome development change for us from XPath. Can you maybe try that? This is one thing we use regularly, and it proves. Yeah, I've heard of it, but I haven't really paid it properly, so yeah, we'll check it out. I think there might be scope for including other approaches to extraction. Okay. Thanks. One is, did you maybe think about master spiders or spiders that can, you said that APIs are brittle, but you could still think of web frameworks and some behave in similar ways, and maybe you could get away to extract certain information from certain kinds of websites? Yeah, absolutely. We have a collection of spiders for all the forum engines, for example. It's not individual websites, but it's the underlying engine powering it, and that works really well. Yeah, we're building collections of those kind of things. My example about APIs, I didn't really meant to dis-API in general. They're often quite useful, but some cases, they don't have the content you're after, and in some cases, the content is maybe lags behind or it's a bit less fragile than what's on the website. That's been my experience, but definitely if there is a web API available, you should check it out. It works fine, it's creepy too. Okay, and just last question, a little bit more technical. Do you have plans for anything to, I don't know, to handle throttling or to handle robots.txt or to reschedule 500 errors or something like that? I know there's an auto throttler plugin, but that does, I mean, it slows you down significantly on a good website, though it does work for slow websites. Thanks. You're welcome. Yeah, throttling is an interesting one, and often internally what we do is we deploy with auto throttling by default and then override it when we know the website can do better or differently. So it is a case, especially when you're calling a single website or a small set of websites, it's worth tuning that yourself. It's hard to find good heuristics, and definitely it's something we do all the time when we write individual scrapers. I'd be interested in your thoughts about how we could come up with some better heuristics by default. It's definitely a very interesting topic. Yeah, and retrying, again, Scrapey does retry stuff by default, but you can configure, for example, the HTP error codes that signify an error that you want to be retried because they're not always consistent across websites. Thank you. Hi, so a slight follow up to the retry thing. You mentioned this briefly under the talk. Do you actually like do things like backoffs and jitters and stuff because from my job we have very interesting situations with synchronized clients and other fun that, yeah, it's good to avoid. Yeah, yeah, yeah, definitely. And actually, I glossed over a lot of details. I mean, I said we run in Scrapey Cloud, but that takes care of a lot of the kind of infrastructure that we typically need. And Alexander gave a talk on the crawl frontier, which is crawling at scale. 
And there's a lot more that goes into that, that it happens outside of Scrapey itself. The first thing, of course, that we noticed as soon as we started crawling from EC2 is DNS errors all over the place. But there are several technical hurdles that you need to overcome, I think, to do a larger crawl at any scale. Okay. Thank you, Shane. Thanks very much. Thanks everyone. Thank you.
Shane Evans - Web Scraping Best Practises Python is a fantastic language for writing web scrapers. There is a large ecosystem of useful projects and a great developer community. However, it can be confusing once you go beyond the simpler scrapers typically covered in tutorials. In this talk, we will explore some common real-world scraping tasks. You will learn best practices and get a deeper understanding of what tools and techniques can be used and how to deal with the most challenging of web scraping projects! We will cover crawling and extracting data at different scales - from small websites to large focussed crawls. This will include an overview of automated extraction techniques. We'll touch on common difficulties like rendering pages in browsers, proxy management, and crawl architecture.
10.5446/20203 (DOI)
Okay, so parallelism shootout, threats, multiple processes, async IO. Just to set the expectation before I start, this is not a deep dive into any of these. So definitely not an advanced talk, probably intermediate, maybe even inner-to-intermediate, depending on how much you know about them. So my name is Sharyar, I'm a software engineer in London, working at BOSTA and I don't have a presentation because it quit unexpectedly. Today we're going to talk about parallelism and the point of it is to take one problem, it will come on the slide eventually, and try and approach solving it using different techniques, threading, multi-processing, or async IO and just get a feel for how each of them work, that's me. We want to take this problem that we have, so we have, let's say, lots of URLs in the file and we want to download their contents and store it on our machine. And the point to that is to use the threading, multi-processing, and async IO libraries or modules separately and then firstly get a feel for the mechanics of how they work and secondly to be able to do a simple benchmark. Now benchmarks make me nervous, especially for parallelism, so don't take it too seriously, it's just to give you an idea or an me an idea of how they compare to each other. So before we start, I'm just going to break down the problem into three main bits. So the first is that we're going to read the URLs from a file. The second part is to download it from the internet and the third one is to store it on our machine. But before we begin, just a quick reminder, who is familiar with IO bounds and CPU bounds, types of computation. Excellent. Just a quick recap, CPU bounds are basically computations that are hungry for CPU, so if you give them more CPU or faster CPU, they perform faster and they end quicker. And IO bound computations are ones that, the time it takes for them to complete depends on the rate and frayo, so you can give them a really fast CPU but it won't make a difference because it's blocking on IO. So to go back to our three original start problems, reading URLs from a file, IO bound, because this access, we're reading it, downloading content, IO bound, HTTP request, again we have to block and wait. And storing the content on our machine again, we're writing to disk, so that's IO bound too. And just as a random thing, generally a lot of things we do are IO bound for a, we do a definition of generally but usually day to day tasks are IO bound. Before we even paralyze though, I think it would be good to just quickly go through the sequential approach and I think that would be a good baseline to compare how much actually paralyzing it improves it and how different methods have different improvements. So a bit of a mouthful, I've put the whole thing on there because you can actually run this and it works. Interesting, the highlighted bit for the sequential approach, we go over the URLs, those functions are just for convenience so that I don't have to write a lot of things again but they do what they say they do. So we go over the URLs, we get the contents and we put it on machine but we do this sequentially. So we do one, we do the next one. And when we think about tasks, when I think about tasks and if I want to make them faster, I would have to think about, so how does this look on my CPU over time? So the way this looks is that it's only running on one of the cores, let's say we have two cores. 
As far as I'm concerned, second core is Skyving because it's doing its own stuff but as far as my task is concerned, it's not doing anything. So I'm doing a bit of work getting the URL, downloading it, storing it on machine and doing the next one, just continues like that. But to even be more accurate, what's actually happening is that we're doing a tiny fraction of CPU work, then we're doing nothing, which is the dotted lines, because we're blocking for IO. CPU is not doing anything and then we do a tiny amount more CPU work. But the reality is this is not actually to scale. So if I was to show it to scale, the bit where the CPU for this particular task is actually engaged is very, very small. So this is a proper IO bound task. And just to show how this works over multiple URLs. So we have one URL takes a tiny amount of time, 30 URLs takes a lot more time. By the way, in the beginning I said the problem statement is that we have lots and lots of URLs. I've just used 30 in this case because it was much easier to run it multiple times. But imagine this over a gazillion URLs. It's not going to happen with sequential approach. But it's good to, you know, it just predicts it goes up linearly. So threading is the first method we're going to use. Threads in Python are actual real threads. Also, there's no controversy in this. I'm not going to talk about the global interpreter lock or fix it or it's just there. That's not going to happen. But just so we know threads are actual p threads or window threads or whatever. They're real threads, right? And quick recap on how to make them. Is everyone familiar with how to use threads? Okay, fair enough. So quickly we can make them two ways, either soft class threading a thread and override the run method or just have a function and use the normal thread class pass as a target and let it do the work. To run the threads, it's just called the start method, not the run method. Call the start method and it goes and does its stuff. And it stops when your actual function, so in the left case, the run method and in the right one, the actual do work function, the thread stops when that function has reached the end. But what if that function never reaches the end? What if we have a while true in it? So we wanted to do constantly work. Then we have the minute, the minute threads. So we pass the dmin equals true to the constructor and that tells it that you will stop whenever the main thread stops. So when the main thread stops, that will stop. Otherwise, the interpreter will lock if we don't because the main thread stops, but everyone else is still running. It's confusing. The threading code, again, this is the full code. So I don't usually like putting lots of codes, but this is it. So I thought it would be cool to go through it one by one. First and foremost, we add URLs to queue. I didn't mention the queue. We need the queue so that different threads can talk to each other. Not talk to each other, actually. Different threads can use something to get what they want to do next, right? Again, Python just gives this to you. Most of you probably know this. It's let's say if we don't have to worry about it. You just create it at the top, unvisited URLs. First thing I do, I go and add the URLs to the queue so that our threads can then consume from it. You get an interesting case if you do that in separate thread too. 
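A minimal sketch of the queue-plus-worker-threads pattern being described (the URLs, worker count, and file naming are assumptions, and error handling is kept to a minimum):

import queue
import threading
import urllib.request

NUM_WORKERS = 10
urls = ['https://example.com/1', 'https://example.com/2']  # placeholder URLs

unvisited = queue.Queue()
for url in urls:                 # fill the queue *before* starting workers
    unvisited.put(url)


def visit_urls():
    while True:
        url = unvisited.get()
        try:
            data = urllib.request.urlopen(url).read()
            with open(url.rstrip('/').split('/')[-1] + '.html', 'wb') as f:
                f.write(data)
        finally:
            unvisited.task_done()   # mark this queue item as handled


for _ in range(NUM_WORKERS):
    worker = threading.Thread(target=visit_urls, daemon=True)
    worker.start()

unvisited.join()   # block until every queued URL has been processed

Note that the queue is filled before the workers start consuming; the caveat about filling it from a separate thread is exactly what comes next.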
You don't want to do that because your queue might never get full and threads might read from it and then they think it's empty and your program ends, but it's not actually empty. So then we go, we have a number of workers. Let's end. We go through them. We each of them create one thread. Give it the target function which is visit URLs. Basically, it tells us what the sequential version did. It literally does that except it gets the URL and it marks this task done which is what you do on a queue to say the task is done. So we create the worker threads and we start them, right? And that does the actual work. That's it. And go back to how my CPU and my time is looking. This would look something like that. This is not accurate, but if we had three threads, then you would have three threads. And once one of them has done what's in the end is waiting, well, we can go to the next one, but it's being very vague here, you know, the OS or someone decides it's going to move on. And so in the same amount of time, we make better use of our resources. We do a lot more work, right? And the yellow thing there is the global interpreter lock which we shall not talk about any more after this. But that's just to say, if the lock just makes sure there's only one thread being run on a core at a particular time, right? So again, our second core is Skylink. It's doing nothing. And to look at the speed and the performance of this approach, this is how it works. The x-axis we have a number of threads. So if we have one thread, it takes ages. It should probably even take more than this sequential version because there's a bit of overhead. By one thread, I mean not the main thread. I mean one extra thread created after it. But we'll see that as we create more and more threads, this goes down. However, it does flat out after, I don't know, in this case, maybe between 11 to 17 threads. You're not really getting any more advantage. And that makes sense because by the time that 17th thread comes up, there might not even be any things left for it to do. But this is only for 30 URLs. If we had a gazillion URLs, then that would flatten out a bit later. And so, okay, this is good. We have reduced the time. Probably, I think, sequential. It took about 30 or so seconds. We've gone down to, what, at a good case, about five seconds. So that's okay. But we want to try multiprocessing now. See how that would perform. So, we're going to try to do a multiprocessing. Again, I assume most people are familiar. Hands up. Yay. With multiprocessing, this one process is the actual processes, right, so they can just run on a separate course. And the cool part about it is that the API is very, very similar to threading, as in very similar. So it sidesteps the interpreter lock. Oh, I said I won't mention that again. This is, I promised it last time. And it's really easy to change our threading example to be a multiprocessing example. And to do that, this is the exact threading code. It's only the highlighted lines are changed, right. So instead of getting Q from threading, we get it from the multiprocessing module. And instead of a thread, we create a process. That's it. Everything else is the same. This is beautiful, right. So I just changed that in five seconds. But the multiprocessing also gives something else, among many other things. That's something that I'm going to talk about here, which is the pool object. And the pool object is a way to paralyze the execution of a function over a number of arguments, right. 
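A hedged sketch of both points just made: the near-identical Queue/Process swap, and the Pool.map variant that the next part of the talk walks through (the URLs, pool size, and helper name are assumptions):

import multiprocessing
import urllib.request

urls = ['https://example.com/1', 'https://example.com/2']  # placeholders


def fetch_and_store(url):
    data = urllib.request.urlopen(url).read()
    filename = url.rstrip('/').split('/')[-1] + '.html'
    with open(filename, 'wb') as f:
        f.write(data)
    return filename


if __name__ == '__main__':
    # The queue/worker version changes only two things relative to threading:
    #   queue.Queue       -> multiprocessing.Queue()
    #   threading.Thread  -> multiprocessing.Process(target=..., daemon=True)
    #
    # The Pool variant goes further and parallelises the *sequential* code:
    with multiprocessing.Pool(processes=10) as pool:
        pool.map(fetch_and_store, urls)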
So what the Pool allows us to do is, instead of changing our threading kind of code to use multiprocessing, we can even change our sequential code to use multiprocessing. And again, this is the sequential code as I showed you on the first slide. All you change is that you read the URLs in advance and you create a Pool. And that Pool has a map method, and what map does is take a function and a list of arguments, but it doesn't call the function with the whole list: every time it calls the function, it gives it one of the items in the list and says do your thing. And you can give it the number of worker processes that you want. So to go back to the time and CPU usage: if we had two processes, this is hopefully how it would look, assuming they would actually get scheduled to run on separate cores. But the idea is that multiprocessing should allow you to sidestep the GIL and be able to run things properly in parallel, so true parallelism, hopefully. Yeah. And if we had more and more processes, then, this is not accurate, but this is how it would look. It's like having two of those threading things; it looks the same as the other graph. But again, this is not exactly accurate because these are processes, but you get much more work done and you have lots more cores to annoy. And the performance is very similar to threading in terms of the way it reduces the time as you add workers. But first and foremost, if you have one single process, it takes longer than both sequential and threading, because the overhead of creating a process is a lot; a thread's overhead is nothing compared to this. But again, you get a healthy drop. This was again for 30 URLs, so after a certain point it's diminishing returns, it's not really doing much. So that's cool too. But asyncio, right? I think asyncio is to Python as big data is to middle management. I don't know. It's a new module in Python 3.4, and it gives you the infrastructure for writing single-threaded concurrent code. It is meant to be quite low level, and the point is that you can use other stuff like Tornado and Twisted on top of it. I don't in this presentation. But it is quite low level and it's fairly compatible with everything else. Well, except, you know, it's Python 3.4, mainly. Is anyone familiar with asyncio? Cool. So asyncio has a lot of concepts. I'm going to go through just two of them, because I think they're the most important ones, and also they're the ones I'll be using in the code later on. One of them is that we have coroutines. And coroutines are basically functions that can pause in the middle of what they're doing, return so that something else can do its work, and then you can go back to that function and carry on. So this should immediately remind you of yield, basically. It's like a generator, right? Because it keeps its state. You do something, it yields, you do something else, but then if you go back to it, it continues from where it was and it keeps its state. That's what coroutines are. And the way they are used: if you have three separate functions and you want to run them in a row, you run one, then you run the next one, then you run the third one. Whereas with coroutines, you can say, okay, I'm going to run the first one until it needs to block. When it needs to block, well, it can yield, and I can do my own stuff.
I can run the second function, and then it does the same thing. I think the slide demonstrates it well compared with just running three separate functions in a row. If you take the case of blue, you know, it suspends because it's blocking, so it gives a chance for other things to run. But they also suspend halfway through, and when blue carries on, it's just making progress. It's not that it's starting again; it's just that it has stopped blocking, so it's ready to go again and can make progress. And also notice that these are not running in the same order: run it again and print stuff, and the order changes. So someone needs to keep track of, you know, the scheduling, and generally just keep track of all these coroutines going around. And that's where the event loop comes in. The event loop is in charge of keeping track of the coroutines, that's mainly the thing it does, and deciding which one is going to go next. I ran through this much quicker than I did last night when I tried this. Okay, anyway. So the code for using asyncio looks like this. yield from is new. Basically, yield from allows you a two-way channel of communication. Usually when you just do a yield, like a generator, it just returns something, whereas yield from allows you to kind of refactor a generator out of your generator. It sounds weird, it probably doesn't make sense, just don't worry about it. Delegation. Yeah, delegation. It does that too. So just to walk through what's happening here: first we get all our coroutines. So do_work is a coroutine, basically, you know, a function that can suspend halfway through, blah, blah, blah. We first create all of them with all our URLs. Then we need an event loop, so from asyncio we can get an event loop. And then the run_until_complete method allows you to pass it a bunch of coroutines or futures or whatever, in this case coroutines, and it will run all of them until they're complete. And I do an asyncio.wait there because I want to actually wait for everything to be completed first. And the way do_work works is that we first need to get the content of the URL. At this point this is fairly IO-heavy, so it yields from get_url_content, which again has a lot of blocking in it, so while we're waiting for that to happen, we can just go back and run the next task. And that would be okay. By the way, there are a lot of different ways of writing this. I was trying to make the shortest possible one so I could fit all of it on one slide. But this is kind of how it works. And then get_url_content has yields too, so halfway through, if it's blocking, other stuff can carry on and do its work. And the performance of this looks pretty cool. So with this number of URLs, it's pretty quick, right? And I think what's really cool about it is that the line, how it increases as you add more URLs, is less steep than it was in, let's say, the sequential case. So this is quite promising. So I'm just going to put all four different approaches that I used, well, sequential, I'm counting that as one, next to each other to see how they performed, for 30 URLs again, which is the whole point of doing stuff like this. We can see sequential is just not going to happen. And threading, multiprocessing and asyncio are all fairly good. I tried running this on lots and lots more URLs, and asyncio did properly outperform the others in that case.
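For reference, a minimal sketch of the Pool.map variant described above; the function name and worker count are stand-ins, not the speaker's exact code.

```python
# Parallelise the sequential download function over the URL list with a Pool.
import multiprocessing
import urllib.request

def download_one(url):
    return urllib.request.urlopen(url).read()

if __name__ == '__main__':
    urls = ['https://example.com/'] * 30
    with multiprocessing.Pool(processes=8) as pool:
        results = pool.map(download_one, urls)   # one item per call, in parallel
```

And a sketch of the asyncio version in the Python 3.4 "yield from" style the talk uses. The speaker's actual get_url_content helper isn't shown, so this stands in a bare HTTP request over asyncio.open_connection; note that the @asyncio.coroutine decorator and passing coroutines to asyncio.wait are the old style and have been removed from recent Python versions.

```python
# Old-style (Python 3.4) asyncio: coroutines with @asyncio.coroutine + yield from.
import asyncio
from urllib.parse import urlsplit

@asyncio.coroutine
def get_url_content(url):
    parts = urlsplit(url)
    reader, writer = yield from asyncio.open_connection(parts.hostname, 80)
    request = 'GET {} HTTP/1.0\r\nHost: {}\r\n\r\n'.format(parts.path or '/', parts.hostname)
    writer.write(request.encode('ascii'))
    body = yield from reader.read()   # yields to the event loop while blocked on IO
    writer.close()
    return body

@asyncio.coroutine
def do_work(url):
    content = yield from get_url_content(url)
    # ... store `content` on the machine here ...

urls = ['http://example.com/'] * 30
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait([do_work(u) for u in urls]))
loop.close()
```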
But again, you have to take these numbers with a pinch of salt, because they are very dependent on the task that you're doing. For IO bound tasks, you can use threading, you can use asyncio, and that's fine. But if this was a CPU bound task, threading wouldn't stand a chance because of, you know, the GIL, and asyncio wouldn't do that well either, as far as I understand it. So multiprocessing would be the answer. So I'm tempted to conclude here, but I don't like making conclusions when it comes to parallelism. I think the whole point of it is that every single task, every time I've come across a separate task, it's just different. You have to look at it: is it IO bound, is it CPU bound, and how IO bound is it actually? So I can't make a conclusion and say, well, always use coroutines, or asyncio, or whatever. I think you have to be pragmatic about the task that you have at hand and just play with it a bit to see which one works well. So I'm definitely not going to use this slide to say, oh, asyncio is much better. No, it's not. It really depends on what you're doing and the type of computation you're doing. So this was meant to be a half-hour talk. I don't know why it's 20 minutes and 48 seconds, but this is it. Sorry to disappoint. Just to waste another minute: please, if you want to give me feedback, other than "your talk was too quick", anything else, please get in touch, we can talk about it. If you want to try other stuff with my code, there are some other resources I've put together, some great links and videos, that will be on that URL on GitHub right after the talk. I'll do it when I get out there. It's there, I just have to make it public. Yeah, this is it really. Q&A. We have a bit of time though; we have time for like 4,000 questions. So, did you ever do something crazy like combine these techniques: have multiple processes that run threads and use asyncio in the threads? So the idea for this talk, initially when I proposed it, was to do something like that at the end. But then I did not. Yeah, I didn't realise I would have like 12 minutes for doing crazy stuff. So no, I didn't. Sorry. Hi. Thanks for the talk, it was very interesting, quite concise. There's something that really puzzles me, though. If you can, please show again the slide with the threading or the multiprocessing times. I really did not understand why, with one thread, it takes, well, we'll see the number. Sorry, one sec. This one, okay? You want the diagram? The times, please. Let's check this out. The next one, I think. Or whichever, the processes one, the next one, this one. Yeah. I really didn't understand why, with one process, it takes more than 30 seconds. Oh yeah. So, what did I do wrong? No, no, no. This is one process for 30 URLs, right? I mean, right? So it's basically the sequential version: if you want to download 30 URLs and each URL takes roughly about one second, just under, that's what you get. Okay, does that make sense? Yeah. You scared me there for a second; I thought I got my axes wrong. Okay, thank you. We can talk about life too if you've run out of questions. We have another 15 minutes. About life and everything: 42. But besides that, what about gevent? Yes. And green threads? So you can use stuff like gevent on top of asyncio. I haven't done it, so, you can definitely use it, you can do that. Tornado, Twisted, everything.
I think the way asyncio was designed was for other frameworks like that to be able to build on top of it. That's why it's quite low level. But I don't have any performance numbers for them. I can do that, however, if you're interested, if anyone else is; I can add it to the slides, and when I put the slides online it could be included. That would be great. Okay. Yeah. Gevent does not run on top of asyncio. It's its own event loop. It's completely different. I'm wrong, then. Okay. Well, yeah, I'll have to look into that. Yeah, it's not right, but I was under the impression that you could. I was just saying that you said gevent and Twisted. Gevent does not fit in there, but Tornado and Twisted definitely do, so they can run on top of asyncio. But gevent is a greenlet thing, you know, it's lower level and does coroutines its own way. So you can't run gevent on top of asyncio. I just want to make sure everyone's on the same page here. So yeah, good, you're paying attention. Now that I have the mic, sorry. So, one thing to add: you presented yield from as the way to do delegation in asyncio. Python 3.5 will have new syntax for that, which was changed because there were problems, yield from a generator versus yield from a coroutine, and that was all a bit of a mess. So there's new syntax now. Coroutines are defined with async def, there's new syntax for defining coroutines, and then instead of yield from inside such a coroutine, you would use await. Yeah, I actually have a post in the resources showing Guido's response on why he chose yield from, and how that has since changed. That's also in the resources; it's a good read. He just goes through why he chose yield from and not await, and then, as you mentioned, the move to the new syntax. Guys, we have at least 21 minutes. Yeah, just something very quick: what about memory overhead? Because processes, or even threads, can take much more memory than a single process, for example. Yeah, they can. Thank you. But again, I really get nervous when I have to make a conclusion like that. Yes, they do. But I think you have to look at your tasks to see what matters, but yeah, I mean, they do have overheads. Did you happen to get the chance to test something here like, say, a four-core hyperthreading processor for the multiprocessing? I didn't. Maybe I should; it would be interesting to see. I feel I've disappointed a lot of people today by not doing all this. It's fine, it's fine. Yeah, if I can do it, you know what, that's great. I want to put this code up. Well, it is up, I want to make it public, so feel free to contribute, and next year I can just do a longer talk. Any more questions? It's lunchtime.
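For comparison, the same idea in the Python 3.5 syntax discussed in the Q&A: async def instead of the @asyncio.coroutine decorator, and await instead of yield from. This is a sketch, not the speaker's code; the bare HTTP fetch is a stand-in, and in practice you would use an async HTTP client.

```python
import asyncio

async def get_url_content(url):
    # Simplified fetch over a raw socket, just to show the syntax.
    reader, writer = await asyncio.open_connection('example.com', 80)
    writer.write(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
    body = await reader.read()
    writer.close()
    return body

async def do_work(url):
    return await get_url_content(url)

urls = ['http://example.com/'] * 30
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*(do_work(u) for u in urls)))
loop.close()
```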
Shahriar Tajbakhsh - Parallelism Shootout: threads vs asyncio vs multiple processes You need to download data from lots and lots of URLs stored in a text file and then save them on your machine. Sure, you could write a loop and get each URL in sequence, but imagine that there are so many URLs that the sun may burn out before that loop is finished; or, you're just too impatient. For the sake of making this instructive, pretend you can only use one box. So, what do you do? Here are some typical solutions: Use a single process that creates lots of threads. Use many processes. Use a single process and a library like asyncio, gevent or eventlet to yield between coroutines when the OS blocks on IO. The talk will walk through the mechanics of each approach, and then show benchmarks of the three different approaches.
10.5446/20202 (DOI)
Hello, everyone. Thank you for joining me. My name is Sever and this presentation is about the library that I'm working on, which makes it easy to model and run distributed workflows. There will be a Q&A section at the end, hopefully, if there is not enough time. You can also stop me during the talk and ask me questions if anything is unclear. Let's get started. We'll start by discussing a bit what the workflow is. I will show you a quick demo and spend the next part of the presentation trying to explain what happened during the demo. The term workflow is used in many different contexts, but for our purpose, a distributed workflow is some kind of complex process which is composed of a mix of independent and interdependent units of work that are called tasks. The workflows are modeled with DAGs, which stands for direct acyclic graphs, dependency graphs between the tasks. They are modeled using some domain-specific language. Or with ad hoc code, like when you have a job queue, but what you really try to accomplish is to have an entire workflow and use the job queue and the tasks in the job queue to do some work, but also to schedule the next steps that should happen during the workflow. Neither of those provide a good solution. The reason for that is because DAGs are too rigid, you cannot have dynamic stuff happening there usually. The ad hoc approach where you have the job queues tends to create code that is hard to maintain because the entire workflow logic is spread across all the tasks that are part of the workflow. Another problem with the ad hoc approach is that usually it's very hard to synchronize tasks between them, so if you want to have a task started only after other tasks are finished, that's usually pretty hard to do. Flow takes a different approach for the workflow modeling problem, and it uses a single-threaded Python code and something that I call gradual concurrency inference. Here is the toy example of a video processing workflow. At the top we have some input data, and in our case, there are two URLs for a video and a subtitle, and then there is an entire workflow that will process this data, and what it will do, it will try to overlay the subtitle on the video and encode the video in some target formats. It will also try to find some chapters, some cut points in the videos and extract thumbnails from there, and will try to analyze the subtitle and target some ads for this video. The interesting thing here, and it's something that you cannot easily do with DAGs, is the part where the thumbnails are extracted. This is a dynamic step, and the number of thumbnail extraction tasks can be different based on the video, so this is where you need some flexibility. Next, I would like to show you how this workflow is implemented in Flowey, and then, like I said earlier, I'll try to explain what really happened there. Let's see. All right. I start with the activities, or rather the tasks, and in this case, I'm using some dummy tasks. You can see all of them have some sleep timer in there, just to simulate they are doing something, and they are regular Python functions. There's nothing special about them. They just get some input data, do some processing, and output a result. So this is similar with what you will get in salary or a regular job queue. This is the workflow code, so it's the code that would implement the workflow that we saw earlier. Again, it's regular Python code. 
We are just calling the tasks, but there is something funny about it because it has a closure, and we are not importing the task functions themselves. And there is a reason for this. This is a kind of dependency injection, and there is a reason for it, and we'll see later why this would be useful. Other than that, there are just function calls and regular Python code. Actually, I'm going to demonstrate that this is not anything special by running this code. So what I do here, I import all the tasks and the workflow function. I'm going to pass the tasks to the workflow closure, and then call the closure with the input data, and this will run the workflow code sequentially. I'm also going to time this execution. So it will take a while because of the timers that I have there, forcing the task to slip. And hopefully... Yeah, that's what happens. Sorry about that. I'll try again. All right. I know what's going on, but whatever. Yeah, something's wrong. So usually, it should work with... It's just regular Python code, so there is no reason for it not to work. But the interesting part here, so running that code would take about 10 seconds because of all the timers, and everything will happen in sequence. So the interesting part is being able to run this as a workflow and have all that concurrency happening. So I'll try to do that. Okay, so it went much faster about two seconds, and the reason for that is because all the tasks that could be executed in parallel were executed at the same time, as we can see in the diagram that was generated. So the arrows there represent a dependency between the tasks, and we can see a lot of them were being executed at the same time. So I'm going to try to explain how that works and why it went so fast versus the previous version, which didn't work. All right. So in order to understand what was happening during the demo, I have to talk about workflow engines first. And we begin with a simple task queue, where we have all the tasks that we want to be executed. The workflows are pulling the tasks from the queue and are running them. And as I said, when you have an approach similar to this, there must be some additional code in the task that will know to schedule other tasks when they are finished. So they also generate other tasks beside the usual data processing that they are doing. And this is not very good because the workflow logic will get spread. And like I said, it's also very hard to synchronize between different tasks. So another idea would be to have the task generate a special type of task called a decision. And what the decision does instead of doing some data processing, it will only schedule other tasks in the queue. So it acts as a kind of orchestrator. Like we can see here, the arrow from the storage to the worker is reversed because the orchestrate the decision will read data from the data store in order to try to get a snapshot of the workflow history and the workflow state. And based on that state and all the tasks that were finished, it will try to come up with other tasks that must be executed next. But this solution is also not very good because you could have concurrency problems. So if two tasks finish one right after the other, you can get two decisions scheduled. And if those are executed in parallel by two workers, they will generate duplicate tasks in the queue. So this is not a perfect solution. 
So in order to improve this even more, we need to have the queues managed in a way that all the decisions for a particular workflow execution will happen in sequence. And for this, we introduce another layer that will ensure this. Another thing we would also want to add is some kind of time tracking system that will know how much time a worker has spent running some tasks. So it can declare the tasks as time out if a certain amount of time passes without the worker doing any progress. So this is not something new. This kind of workflow engine is implemented and provided by the Amazon SWF service. It's also available as an open source alternative in the Eucalyptus project with the same API that Amazon has. There is also a Redis-based engine similar to this in the works that I know of. And there's also the local backend that you saw earlier in the demo. And the local backend will create all this engine and the workers in a single machine on a single machine and will run them only for the duration of the workflow and then everything gets destroyed. So hopefully by this time, this was the code that the workflow code in the demo. So hopefully at this time, you kind of get an understanding that this code will run multiple times. So every time a decision needs to be made for this workflow to have progress on it, this code will be executed again. So if I were to put a print statement there and run the workflow, I would see a lot of print messages. Okay. So I mentioned earlier about dependency injection and why that's needed. And the reason for it is because Flowey will inject some proxies instead of the real task functions. And the proxies are callables and will act just as a task would, but they are a bit special. So when a proxy is called, the call itself is non-blocking, so it will return very fast. And the return value of the proxy is a task result. And the task result can have three different types. It can be a placeholder in the case that we don't have a value for that task. It can, or maybe the task is currently running and we don't have a result for it. It can be a success if the task was completed successfully and we do have a value for it. Or it can be an error if for some reason the task failed. The other thing a proxy call does, it looks at the arguments and tries to find other task results that are part of the arguments. If any of the argument is a placeholder, then this means that the current activity or task cannot be scheduled yet because it has dependencies that are not yet satisfied. So it will track the results of the previous proxy calls through the entire workflow, like we can see here. So in this case, when the code is run for the first time in a workflow, the embedded subtitle task will be scheduled and its result will be a placeholder because we don't have a value for it. But the calls for the video encoding won't schedule any activities because they will have placeholder as part of their arguments, meaning that there are unsatisfied dependencies. And in this case, the results for the proxy calls for the encode video task will also be placeholders. So what this does, it's actually building the DAG dynamically at runtime by tracing all the results from the proxy calls through the arguments of other proxy calls. And finally, workflow finish its execution when the result, the return value contains no placeholders, meaning that all the activities or all the tasks that were needed to compose the final result are finished. 
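To make the proxy mechanics described above concrete, here is an illustrative sketch only; it is not Flowy's real API, and the class and attribute names are made up. It mimics the behaviour described: proxy calls never block, results are placeholders until a value exists, errors propagate through arguments, and a task is only scheduled once none of its arguments are placeholders.

```python
# Illustrative sketch of the proxy / task-result idea (invented names, not Flowy's API).
class Placeholder:
    """No value yet: the task is running, or hasn't even been scheduled."""

class Success:
    def __init__(self, value):
        self.value = value

class Error:
    def __init__(self, reason):
        self.reason = reason

class TaskProxy:
    def __init__(self, name, finished, schedule):
        self.name = name          # which task this proxy stands in for
        self.finished = finished  # {(name, call_index): value} from the execution history
        self.schedule = schedule  # callable that queues the real task for a worker
        self.calls = 0

    def __call__(self, *args):
        key = (self.name, self.calls)
        self.calls += 1
        if any(isinstance(a, Error) for a in args):
            return Error('dependency failed')     # errors propagate to dependants
        if any(isinstance(a, Placeholder) for a in args):
            return Placeholder()                  # dependencies not satisfied yet
        if key in self.finished:
            return Success(self.finished[key])    # we already have a value for this call
        plain_args = [a.value if isinstance(a, Success) else a for a in args]
        self.schedule(self.name, plain_args)      # all inputs ready: schedule the real task
        return Placeholder()                      # ...but there is no value on this run
```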
And like you can see here, this is true for even for data structures. So we have here a tuple and the values are inside the tuple and this will continue to work and the templates there are in our list and those will also get picked up. So you can use any kind of data structures for the return data as long as it can be JSON serialized. That's what it's used for serialization. So there are a couple of important things to keep in mind when writing a workflow. Usually what you want is for all the decision executions to have the same execution path in your code for the same workflow instance, right? So for all the decisions that belong to the same workflow instance. This usually means that you have to use pure functions in your workflow or if you want some kind of side effects, either send those values through the input data to the workflow or have dedicated activities for them or dedicated tasks for them. So the other thing you can do with the task result is to use it as a Python value. Like we see here, I'm squaring two numbers and then I'm adding them together. And when this happens, if any of the value involved is a placeholder, meaning that there is no result for it yet, a special exception is raised that will interrupt the execution of this function. So in effect, this acts as a barrier in your workflow and it won't get passed until you have the values for the results that are involved. This also means that if you have code after this place that can be concurrent, it won't be detected. So you have to make sure that you access the values as late as possible to have the greatest concurrency. A similar thing happens in the original code of the example where we iterate over the chapters that are found in the video. So here, too, this acts as a barrier, but being at the bottom, it didn't affect the rest of the code, so you may have not noticed it. Another example is when you have a situation like this one, so here I'm squaring two numbers and then I may want to do some optional additional computation and it's not clear in what order the if conditions should be written because in this case, if the b computation, so squaring of the b is the first one to finish, because I have the conditional on the a value, it will have to wait until the result for a is available to progress further in the workflow. And no matter how I try to write the code, there will always be a case where the workflow cannot make progress until the other value is available. And this is kind of a problem, but it can be solved with something that is called a subworkflow. So here I refactored the code that did the processing for each number in part in a subworkflow. And then in the main workflow, I'm using the subworkflows as I would use a regular task and this way they can all happen in parallel and when both are finished, I can sum them and return the result. So workflows are a great way to do more complex things that you couldn't without them. And another thing to notice here, in the main workflow, I didn't have to do anything special to use the subworkflows. They are used just as regular tasks. So for error handling, you might expect the error handling to look something like this. This is how a normal Python code would look like if you had some exceptions in a function. But this is not possible because, as I said earlier, the proxy call is not blocking, so you cannot get the exception at this point. So actually, this is the place where you have to write your try accept clause. 
So the reason for this is because only at this point can we force the evaluation of the result, and only at this point do we know for sure if the computation was successful or not. And this looks a bit strange and I don't like it too much. There is a better way of doing it, using the wait function that comes with Flowy. What it does is try to dereference the task result, similar to doing an operation on it. And the name is a reminder that this will act as a barrier: not only will nothing pass this point until the value is available, but anything below it won't even be detected as something that could run in parallel until then. But this is not always the case. Maybe you don't want to use the value in the workflow itself; you just want to pass the value from one task to another task. In that case, how do you pick up errors? So what would happen here if the result for B is an error? When you're passing an error in the arguments of another proxy call, the proxy call will also return an error. So errors propagate from one task to the other, and if the result value that you try to return from the workflow contains errors, then the workflow itself will fail. So you cannot dodge errors: you have to deal with them, or you can ignore them by not making them part of the final result, in which case you will get a warning message that you had some errors that were not picked up or handled by your code. So the workflows can also scale, by using some of the other backends that I mentioned earlier, the Amazon one or Eucalyptus. And when you want to scale, basically nothing changes in the workflow, so you would still use the code that you saw earlier. There is some additional configuration that you have to do, and that happens outside of the code, so it's not part of the code. Because when you scale and you want to run the workflow on multiple machines, in a distributed system there can be all kinds of failures, so there are some execution timers that you can set, and those will help you with fault tolerance. There is another type of error that you can get when you scale, which is a timeout error, a subclass of the task error that we saw earlier, so you can have special handling for timeouts. There are automatic retry mechanisms in place for the timeouts, and you can configure them as you wish. There is also the notion of heartbeats. Heartbeats are callables that a task can call, and when a heartbeat is called, it sends a message to the backend telling the backend that the current task is still making progress. But another thing it does is return a boolean value to the task, and that boolean value can be used to know if the task timed out, in which case you can abandon its execution, because even if it finished the execution successfully, its result would be rejected by the backend. Another thing to keep in mind: you should aim to have tasks written in such a way that they can safely run multiple times, just because of the failures that can happen and the retries. The tasks, or the activities, I'm using them interchangeably, they mean mostly the same thing, can be implemented in other languages, so you can use Flowy only for the orchestration and workflow modelling, so the engine and the logic to run the activities. There are some restrictions on the size of the data that can be passed as input, and on the result size.
Each worker, so when you are scaling and you run multiple machines, you would have workers that are running continuously, not like we had for the local backend where they were running only for the duration of the workflow and those workers are single-threaded, single process, so if you want more of them on a single machine, you have to use your own process manager and start them and make sure that they are alive. And if the history gets too large, so the decision must use the workflow history, the workflow execution history and the workflow state to make decisions, and if the history gets too large and actually the history, the data that is transferred by, because of the history, has an exponential growth, you can reduce that by using sub-workflows. Sub-workflows will only appear as a single entity in the history, so you can get, basically, you can get logarithmic data transfer by using sub-workflows in a smart way. And because of the fault tolerance building, you can scale down, so you could, like, for example, all the workers can die at some point in time and then after a while they would come back online and the workflow progress won't be lost. You may still lose the progress on specific tasks, but the workflow itself, the workflow progress won't be lost. And this is very useful for workflows that take a very long time to run. I think the maximum duration for Amazon is like one year for a workflow, so this can be very useful in some situations. And you can also scale up very easily, just start new machines and they will connect to the queues and start pulling tasks that need to be executed. Thank you. That was all. If you have questions, I think now it's a good time. How does this compare to Celery? There is Celery, you can create tasks and it will automate them. How can you compare it? Yeah, so Celery is a distributed task queue or job queue. And it's a bit different because here you have the orchestration of the tasks, so if you have many tasks and you want them to operate in a certain way with some dependencies between them and to pass data between them, you can do that by writing single-threaded code and from that single-threaded code, the dependency graph will be inferred for you and it will make sure that the tasks are scheduled in the correct order and they get the data they need passed in. So I would use Celery for one-off jobs, sending an email or something, but not for hundreds of jobs that are somehow interdependent. Yeah, it also has Canvas, which is more like a DAG where you define your workflow topology before not in such a dynamic way you can do with single-threaded Python code where you can have conditions and for loops and all that. Thank you. What asynchronous library you use as the bottom of the flow? Sorry. What? What does it say? Asynchronous library. I think it's maybe event. I don't think I'm using any asynchronous library. For the local backend, I'm using the futures module to implement the workers, but there is no asynchronous library involved. Okay. Thanks. Yeah, in the example workflow, you showed one of the tasks returns the list, so the list of chapter points that then gets fed into something that builds thumbnails for the chapters. Do you have to wait? Does that task essentially block until every single chapter has been found? Or would it be possible maybe with code changes to support, say, a generator function so you could start building a thumbnail to the first chapter while the task is still finding the later chapters? So here it will block. 
So any code under the thumbnails line won't be executed until we have the chapters. And this is because the fine chapters returns a list, and it's a single result, and we cannot get partial results from the task, so we have to wait until the entire result is available. So anything below that will be blocked until the result is available. And this isn't such a big problem usually because there are ways to write the code, and this doesn't become a problem, or if it is a problem, you can create a sub-workflow. So I could have a sub-workflow that would do only the fine chapters and the thumbnail generation and then call the sub-workflow from here and have that running in parallel with the other code. Sorry, just to follow up then. Does that mean that in this example, add tags, which you could start processing immediately, won't be executed immediately because you're waiting for the video encoding to finish? No. So in this case, in this example, all the tasks that can be executed in parallel will be executed in parallel. So the actual execution topology will basically look exactly like this one. So this is how it will get executed. That's why the workflow duration was about two seconds instead of 11 or something. The time for the last question. Kind of a repeat of the previous one. He yet made a good point about not the thumbnail line, but the line above where it's finding the chapters and returning the list, it won't return from fine chapters until it's found all three of the chapters. But if you could convert fine chapters to be a generator or get it to return next chapter and then you can do the thumbnail for the first chapter while fine chapters is still finding the second chapter. So yeah, you could have a task that will only find the first chapter and return that and then call the task again and it will resume from that point. You can actually send the last chapter and find the next one. And this way you can solve the problem if you want to. It really depends on how you write your code. The only rule you have to remember is that when you try to access a value in the workflow, it will block until the value is available. That's basically the only thing you need to know. Anything below that point won't be detected and cannot be concurrent. And that can be solved through sub-workflows. So, thank you very much for your talk. Thank you. Thank you.
Sever Banesiu - Distributed Workflows with Flowy This presentation introduces Flowy, a library for building and running distributed, asynchronous workflows built on top of different backends (such as Amazon’s SWF). Flowy deals away with the spaghetti code that often crops up from orchestrating complex workflows. It is ideal for applications that do multi-phased batch processing, media encoding, long-running tasks, and/or background processing. We'll start by discussing Flowy's unique execution model and see how different execution topologies can be implemented on top of it. During the talk we'll run and visualize workflows using a local backend. We'll then take a look at what it takes to scale beyond a single machine by using an external service like SWF.
10.5446/20199 (DOI)
please join me in welcoming Sam. Thank you. Can everyone hear me, is that too loud, too quiet? That's ok brilliant. Well, thanks for coming to this talk. I'm going to be talking about integrating software into collections of software that work together which sounds easier. Sounds like a solved problem, but it actually isn't. It's how many people do that here, working distributions or creating software software for embedded systems. How many people here? A few. Do you find it easy? Are there pain points? The base work project is developing tools to try and make this process easier. The goal isn't to replace traditional distributions. The goal is to develop. It's kind of a research project to develop a set of parts which work together but also work independently so people can adopt any of the parts that they see useful for them. All the tooling is written in Python. It's all free of legacy. Well, the project started about four years ago, so it's not completely free of legacy, but more free of legacy than most existing things in this area. The project started with this problem, build a working Linux operating system straight from the source code. If I ask how many lines of Python code, do you think it would take you to do that? Any guesses? No guesses. Well, I'll give a couple of hints. The project of dealing with source code, we have a solution for that, which is a server which mirrors every popular form of version control and mirrors tar balls, all into Git repositories on one server. The build tool doesn't have to deal with downloading tar balls from random places or anything else. It can consider that everything's in Git. Most things are in Git now, but it imports things from a curial, subversion, and whatever else, so you get a consistent interface. Also, all the build instructions that you need are spelled out in this consistent YAML format. This is an example of a simple build instruction for bin utils. This is the instruction we have for Python, the C Python interpreter. It says, use the standard commands for auto tools, but override the configure commands, run something when it finishes to create a sim link. We have a reference distribution which describes how to build a whole system in a form like this. There's another slightly more complicated YAML document which then says what ref to build and how to fit it all together. That's it. With these parts, we actually have a build tool which produces a work in operating system with about 2,000 lines of Python, which is an order of magnitude simpler than anything else you'll find in this area, I think. That's because we've taken the approach that writing a build system should be easy enough that this squirrel monkey could do it. If we solve the problems around all the problems in the area, then the build tool itself becomes trivial, which is good because writing a build tool is quite a thankless task. Nobody really enjoys doing it. If we remove all of the problems around it, then it becomes trivial, at least by not trivial, but fairly easy. Lines of code is a bit of a horrible metric. I don't want to assign too much meaning to it. The tool in question is a prototype. It's called YBD. We have an older build tool as well called Muff, which is a lot bigger. I'll go through the bits of base rock. What do you need to actually build such a system? These are the items that you need, really. I'll go through each of those in a bit more detail. The source code mirroring service, that's a server appliance called Trove. 
We have one running, which I should be able to show you in a browser. Here's one we have live at baserock.org, and it contains lots and lots of Git repositories. Quite boring, but it's good to have a consistent interface. There's an easy way. You submit a patch against a repo called Lorry's, and it mirrors more things. You can also set up your own instance of this, or the actual mirroring tool at the heart of this is a simple script called Lorry, which takes a JSON file, which describes where to get source code from, and pushes it into Git. That's source code mirroring. You then need a way of describing build instructions. I'm going to go into that in more detail later on, because I think it's one of the most interesting parts of the project, so I'm not going to touch on that now. You then need a build tool to actually... Hang on. I've done this in the wrong order. You need a language for creating build instructions, and then you need some actual build instructions. We've defined the syntax for describing how to build stuff, and then in order for the tool to be useful, we have a set of definitions that you can use to build a system as well, but you don't have to use those definitions. It's quite hard to visualise build systems and build tools, so I apologise for the fact that a lot of my slides are screenshots of a terminal. I also have a few diagrams, but this is the list of package groupings. We call them strata. They're like layers in Bitbake, and you can see if the text is big enough, there's some fairly standard packages, GTK, Qt, various Python libraries. OpenStack is in there. You can actually use base rock tools to deploy an OpenStack Juno instance, which is quite impressive, I think. Going back to building a working GNU Linux system, we release... Every so often, we release one of the reference systems called the build reference system, which you can download from here. I'll just show that it does, in fact, work. This is me loading the VM image in QMU, so this is a base rock reference system, and it boots to a bash prompt, and there we are. It's a Linux system built entirely from source code. One cool thing about stuff that's built with base rock is every image contains metadata that shows you exactly what repo and what ref everything was built from. This is slash base rock directory. Is that big enough, by the way? It's QMU. I can't really make it any bigger without using a serial console, so I apologise if you can't read it. But there's a bunch of metadata files, one for each component in the system, and then fire up on one of them briefly. It contains metadata about... This is the Zlib component, and it was built from... It was built with these build instructions. This was the environment. These were the versions of the dependencies, and at the bottom, it shows you the URL of the repo it was built from, exactly what SHA1 it was built from. If you've got the source code mirrored in the server I showed you, then you can go from any system that's been built and you can look at exactly what commit of what Git repository everything was built from. There's no... Oh, this system broke, and I can't actually work out what I'm running. That problem goes away. We have the build instructions. They're in a repository on git.base.org. We call them definitions. Then we have a tool, in fact two tools, which you can use to build them. Source code from Git goes in. The build tool just runs a sequence of shell commands in the right order, which, like I say, should be easy, and produces a binary. 
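The "right order" part mentioned above is essentially a topological sort over the build-depends graph. As a toy illustration only, with made-up component names and no claim to be Baserock code, the core of it might look like this:

```python
# Toy topological sort over build dependencies (invented data, not Baserock's schema).
def build_order(components):
    """components: {name: [names it build-depends on]} -> list in build order."""
    order, done = [], set()

    def visit(name, seen=()):
        if name in done:
            return
        if name in seen:
            raise ValueError('dependency cycle at ' + name)
        for dep in components.get(name, []):
            visit(dep, seen + (name,))
        done.add(name)
        order.append(name)

    for name in components:
        visit(name)
    return order

print(build_order({
    'python': ['glibc', 'zlib'],
    'zlib': ['glibc'],
    'glibc': [],
}))
# ['glibc', 'zlib', 'python']
```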
Then we have an artifact cache, which just holds tar balls of binaries. There's two tools. Morph is the older one, which has a lot of features, some of which it doesn't actually need, it turns out. It has some quite cool things. It has a distributed plug-in, so you can set up multiple Morph workers and have them share builds at the component level. If you've heard of disc.c, which distributes at the level of the source file, Morph can distribute at the level of the actual component, so you could have different packages compiling on different systems. YBD is more of a proof of concept that shows that you can make a radically simple build tool. They're both available on git.baserock.org. After you've built something, it's not much use having a tar ball, really, so you then need a tool to deploy it. Deployment is a bit more messy than building. I think building is quite a well-defined problem. I say building. I should be saying building and integration, because there's more to it than just running compilers. The output is a binary. Once you try to run the binary, you need to do some extra work to deploy it. For example, if you want to deploy to OpenStack, you need to create a disk image, upload that as an image to OpenStack Glance, and then boot it. If you want to deploy it to Docker, then you need to import it into Docker as a tar file. If you want to deploy it to real hardware, then you may have to put it on an SD card, wait five minutes, take the SD card out, put it in the machine. Deployment is a bit more messy. There is tooling in baserock to do that at the moment, but actually I'd like to get rid of it completely. How many people here know Ansible? Good. The Ansible is great. I'd really like to replace our deployment functionality with an Ansible module. We don't have to think about that in baserock anymore, because I think Ansible solves a lot of problems really neatly. You have deployment. The last piece of the puzzle is caching, because you don't want to build things more than once. Because of the way baserock tracks the inputs of everything it builds, and it builds everything in an isolated staging area, like an isolated charoute, you can be sure that if you run the same build twice, you get the same thing out. Not always the same bits, although we are working on that, but you get an artifact which works the same each time. You can cache, basically, by hashing all of the inputs and all of the dependencies, coming up with an identity, and then saying, right, this is what I've built. I'll refer to it with this hash. If something's already built it, then it's already cached, and you don't need to build it again. We have a simple cache server which you can use for storing artifacts. There are a couple of other bits. We have a continuous builder, which is really just a shell script which runs the morph build tool over and over again, so it's not that interesting. I won't talk about that. I talked about sandboxing builds. We recently spun out the code to do sandboxing into a simple Python library called sandboxlib. It has one API, basically. It has one function call, and a couple of others, one main function call which runs a command, like the subprocess.p open, but you can specify a couple of things that you do or don't want to share or isolate. 
You can say, put it in a new isolated mount space so it can't see the mounts from the system, or put it in a new network namespace so it can't connect to the internet, or mount these extra directories from the host, or make certain bits read only. It doesn't implement that functionality itself because there's lots of tools that already do it, but they have different strengths and weaknesses. For example, a lot of the containerisation tools like Docker, and systemd, and rocket need to be run as root, where you can use a much simpler tool called linux user charoot and run that as a user. Most of those are linux specific, so it also has a charoot back end which will run on any POSIX OS, but doesn't support most of the sandboxing capabilities because in a charoot you can't, or using just POSIX APIs you can't say open a new mount namespace because they're a linux specific feature. The charoot back end is fairly incapable, but it allows you to degrade the sandboxing capabilities if you want. There's the chart again filled in with the names of some components. I said the part that interests me the most about baserock is the definitions language, which we refer to as declarative build instructions, or declarative definitions. The idea is to turn build instructions into data. At the moment, they're code, there's lots of build instructions in the world. Debian has build instructions for 10,000 or 100,000 packages, but it's all code. It's really hard to reason about it unless you understand all seven build systems that Debian has developed over the years. Diclarative build instructions, we want to treat the build instructions as simple sequences of commands so they can be treated a lot more like data. We discourage ad hoc implementing features in shell scripts in the build instructions. There's no logic for the build tool mixed in. If you look at build route, which is a tool written largely in make for building systems from source code, build route is great, but nobody really understands how the core of it works anymore because all of the instructions are written in make and tied up with the build definitions themselves. While it works, it's quite difficult to actually make changes to it anymore. Finally, I really don't like shell scripts, so I'd like to minimise the number of shell scripts in the world. I'd much rather have everything as data on Python scripts. What we've done is defined, this yama language was defined a few years ago, and we're now trying to rationalise it and turn it into something formal and useful outside base rock. I defined a schema of the current data model. I tried to make a nice graph, and instead I came up with this graph, which shows you the entities we have at the minute. We have a command sequence, that's the fundamental unit of building something. You run a sequence of commands, for example, configure, make, install, and then there's something called a chunk, which is kind of like a package. We have these grouping called strata and systems, which I think in the future we'll do away with and just have one sort of component that contains other components. Really, I think the main problem in doing that work is coming up with a word, which means component that contains other components without having it be really long or really weird. As they say, coming things is one of the hardest problems in computer science. At the moment we have this data model, which is still fairly simple, the final entity is the cluster, which represents a cluster of systems. 
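A small sketch of what treating build instructions as plain data buys you, tying back to the caching point made earlier: load a definition with PyYAML and derive a content-based cache key by hashing it together with its dependencies' keys. The field layout here is a simplified assumption, not the exact Baserock schema.

```python
# Sketch: definitions as data, plus a content-addressed cache key.
import hashlib
import json
import yaml   # PyYAML, assumed installed

def load_definition(path):
    with open(path) as f:
        return yaml.safe_load(f)   # plain dicts/lists/strings, ready to analyse

def cache_key(definition, dependency_keys):
    # Same definition + same dependency keys -> same key -> reuse the cached artifact.
    blob = json.dumps(
        {'definition': definition, 'dependencies': sorted(dependency_keys)},
        sort_keys=True,
    ).encode('utf-8')
    return hashlib.sha256(blob).hexdigest()
```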
When you deploy something with base rock tools, you deploy a cluster, even if there's only one system. We have our reference systems repository contains a set of chunks for things like Python, GTK, Qt, different Python libraries. It contains strata, which integrates those into logical grouping. For example, there's a Qt5 strata, which contains the various bits of Qt that you need to use it. Systems, which have a specific purpose. For example, the OpenStack server system contains a bunch of different things. Its purpose is to deploy an OpenStack system that you can then host other VMs in. There's also a build system, which has build tools in such things. It's easy to define your own ones. I meant to show this earlier, actually. I was going to show YBD starting to build something. It won't finish because it will take hours and I will probably run out of time. This is the reference definitions repository in the systems directory. I'll see if I can make that a bit bigger. We define the build system. It contains a simple list of the strata that you want. For example, core Python libraries, the BSP, which contains Linux and a bootloader, different Python libraries, Ansible, Cloud and It, and such things. If I tell YBD to build that, I'm not entirely sure how far it will get because I'm not sure if I'm connected to the internet or not. It won't get too far anyway. It's still loading things from disk, in fact. I'll come back to that. Another interesting thing we can do with once the definitions are considered data is there's a lot of existing data analysis tools, which you can use to look at them. I made this... This is YBD actually building something. It's calculated an identity for each component involved in the build. Pretty soon it will get to the point of running some running configure for bin new tools, probably. There we go. This is what a base rock build tool looks like. It's just running a command, and this will take in about four hours. You'll get a system out the other side, which I won't show you. Going back to browsing the definitions, I found an awesome Python library called rdflib web. Rdflib lets you deal with link data in Python. Rdflib web lets you create a really simple browser to explore it. This is running on my local machine. I implemented it in about four lines of Python using rdflib web. It shows you all... I can look through what a chunk is. It has these different properties. Then I can look through all the chunks that we have defined in the reference definitions. Here's Cpython. That defines some configure commands, for example. Then it shows me the linkage between them. That gets referred to in a few different strata, for example. My point is that this is really easy to do. Once build instructions are represented as simple YAML files or stored in a database, you can reuse analysis tools like this, which has not developed at all for build tools, but it's a general purpose thing. We can now use it for analysing build instructions. I'd like to generate some interesting graphs in future as well, having been to a lot of data visualisation talks yesterday. I'm very interested in making pretty graphs and network diagrams now. The final part of the talk is how this can be useful for Python development and how many people use virtualenv. Virtualenv is really useful. Quite a simple way of isolating your Python dependencies. 
It has a few problems, though: if you want to install a library that needs a system library and you don't have it installed on your system, there's nothing virtualenv can do about that. You can use the Baserock tooling to build a container which tracks all of the dependencies that you need, rather than just the Python ones. If you don't have a problem with virtualenv, keep using it, because it's much more convenient. But if you find yourself reaching the limits of what virtualenv can do and find you actually have to start installing packages and tracking dependencies elsewhere, Baserock gives you a way of defining everything: all the Python dependencies, all the C library dependencies, right down to the toolchain you use to build it. Writing definitions by hand is a bit boring, so we have a tool called the import tool, which can import metadata from other packaging systems. We developed a way of importing information from PyPI. Quite a lot of work went into this, quite a lot of research by one of my colleagues. We tried looking at the source repos of Python projects and using a patched version of pip to analyse what dependencies they expressed. It's actually quite difficult to get information that way. The problem is, again, because setup.py is code, people can really do anything there. You find setup.py files that don't make sense to pip when you run them in the repo. What we've ended up doing is we have a solution which sets up a virtualenv environment, uses pip to install a package, and then uses pip freeze to get the list of dependencies. It's not the most efficient solution, because you have to compile any embedded C extensions or other things, but it has the advantage that it always works. Does anyone want to see this? The idea is to generate something which can be used in a tool that's useful outside of Python libraries. I can show you, if you name a package, and I can show it working if I have an internet connection. I can show you an interesting one, lxml. I found that some packages, some that you'd expect to have a lot of dependencies, don't actually list any; for example Django and NumPy don't list their dependencies in a machine-readable way, they list them in the README. Sadly, no. I guess this is going to do quite a lot of compilation, so I shall leave that. The final bit I want to talk about is why we're doing this. There are a few reasons. One is that hacking on operating systems is quite fun. One is that there are a lot of best practices today which some people follow and some don't, and we find ourselves cleaning up in projects where the best practices haven't been followed. Making tooling where you can't actually avoid following best practices is a goal. One of those is not depending on third-party hosting. Most build systems today download tarballs from upstream websites, which is great until the website disappears or gets compromised. Recently, Gitorious.org went offline, for example, forever, and all of the source code mirrored on Gitorious disappeared, which would be really annoying except that we'd been mirroring all of the projects we needed for years anyway, so it didn't make much difference. At some point we'll have to find the new upstreams for the ones that have moved, so they keep up to date. You can imagine if you have a build system which clones stuff from Gitorious and then, the day before your release, it disappears. That's a real problem, whereas if you have a source mirror, you're insulated from that.
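Going back to the import tool for a moment, the dependency-discovery trick described above (install into a throwaway virtual environment, then ask pip what ended up there) can be sketched roughly like this. This is my own sketch using the standard venv and pip command lines, not the import tool's actual code, and it assumes a POSIX layout.

```python
# Sketch: discover a package's Python dependencies via a throwaway venv + pip freeze.
import os
import subprocess
import sys
import tempfile

def python_dependencies(package):
    with tempfile.TemporaryDirectory() as env_dir:
        subprocess.check_call([sys.executable, '-m', 'venv', env_dir])
        pip = os.path.join(env_dir, 'bin', 'pip')
        subprocess.check_call([pip, 'install', package])
        frozen = subprocess.check_output([pip, 'freeze']).decode('utf-8')
        # Everything pip installed, minus the package itself.
        return [line for line in frozen.splitlines()
                if not line.lower().startswith(package.lower() + '==')]

print(python_dependencies('lxml'))
```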
Does anyone want to see this? The idea is to generate something which can be used by a tool that's useful outside of Python libraries. If you name a package, I can show it working, if I have an internet connection. I can show you an interesting one: lxml. I found that some packages that you would expect to have a lot of dependencies don't actually list any; for example, Django and NumPy don't list their dependencies in a machine-readable way, they list them in the README. Sadly, no. I guess this is going to do quite a lot of compilation, so I shall leave that running. The final bit I want to talk about is why we're doing this. There are a few reasons. One is that hacking on operating systems is quite fun. Another is that there are a lot of best practices today which some people follow and some don't, and we find ourselves cleaning up in projects where the best practices haven't been followed. Making tooling where you can't actually avoid following best practices is a goal. Some of these are: not depending on third-party hosting. Most build systems today download tarballs from upstream websites, which is great until the website disappears or gets compromised. Recently Gitorious.org went offline, for example, forever, and all of the source code mirrored on Gitorious disappeared, which would be really annoying except that we'd been mirroring all of the projects we needed for years anyway, so it didn't make much difference. At some point we have to find the new upstreams for the ones that have moved, so they keep up to date. You can imagine if you have a build system which clones stuff from Gitorious and then, the day before your release, it disappears: that's a real problem, whereas if you have a source mirror you're insulated from that. Making a source mirror is really easy using the Trove server appliance. Trusting third-party binaries is another thing, which seems to have become really common at the moment with the rise of Docker. Which is great: download a binary whose source you can't really inspect, and run it as root on your computer in a bunch of namespaces. No. Please build things from source instead. That's why we want to write tooling which builds everything from source, so you don't have to trust random binaries downloaded from the internet. Two other things: keeping things up to date, and making it as easy as possible to fix them upstream. Because everything's in Git, you can clone any component that you think there is a problem in, straight away, from a local server; you don't have to worry about what format it's in or anything else. Then, once you've worked out what the problem is, you can create the fix and submit it to the project. We discourage patching things in the build instructions. A lot of distributions carry endless patches against projects which never seem to get upstreamed. Some of them can't be, some are legitimate distro-specific things, but we really want to discourage patching, because it makes things more difficult: you come to upgrade from Python 3.4 to Python 3.5 and it turns out half your patches no longer apply, so you don't upgrade for a long time. We encourage building things directly from source code. That's all I wanted to talk about. Thanks a lot for listening; I'll be happy to take any questions. Hello, thank you. I have a lot of questions. First, can you compare your system with Packer? With what, sorry? Packer. Yes, I have used Packer. Packer starts by taking an image that's already built; it will take, say, an Ubuntu base image, then it can run a bunch of different commands, like running Chef with a Chef script or Ansible with an Ansible script, and then it can deploy the image somewhere. It's in a related area; they overlap. I thought at one point about writing a Baserock plugin for Packer which, instead of starting with an Ubuntu image, could start by building, or using a cached version of, a system from source code. So the answer is they could interoperate; at the moment they don't, and I'd like to look at how to integrate Baserock with Packer. Does Baserock work on Windows, or only on Unix systems? YBD works on any POSIX system. Some of the tools only work in Baserock itself, to free us from having to track dependencies and make them work on all distributions. So: Linux, or POSIX. About containers: some container tools like rkt, Vagga or LXC can work without it. Do you use them on POSIX systems? Not at the moment, no, but I'd be interested in implementing that in the sandboxing library; if you want, we can talk about it later. Great, yeah. Last question is about the Nix package manager, NixOS. Yes, that's an excellent question. I do know about NixOS and think it's a great project. I'm terrified of the complexity, but I would very much like to align everything we're doing with them as it becomes possible. Okay, thank you. I'm a bit slow between the ears, so forgive me, and you probably already addressed this: do I understand correctly that with Baserock I can do a sort of Gentoo-type system where the entire system is built from source, but there's no way I can start from a CentOS base or a Debian base? Yeah, that's it, yeah.
Okay, so that Packer integration would be pretty awesome; maybe I should go write that myself. Thank you. Hey, so if I understand correctly, this build is happening in a chroot: you're essentially running these commands to put binaries into the chroot. You mentioned there's an integration thing that happens afterwards, if you're going to perform modifications of things in the chroot, I'm imagining; I saw post-install commands. Yeah, there are post-install commands. Basically those exist so that you don't have to override the default install commands; for example, for autotools the default is make install. Okay, so you're not actually having to execute any commands inside the chroot itself? No, those commands all run inside the chroot. Okay, so now I'm wondering how you deal with architecture differences, for example, or things of that nature, where your build host doesn't support the target, like running executables inside the target. So, cross compiling. Yeah, that's one example. It doesn't support cross compiling, deliberately, to avoid the complexity of supporting cross compiling. Okay; well, there's a whole other set of scenarios that I've run into with similar tooling, where SELinux is another example, right, where your build host doesn't support it, or crossing major kernel versions, that type of thing. We recommend running builds inside a Baserock VM or chroot, so the only thing that affects us is kernel versions. There is a requirement on what kernel you have, but you can get around that by using a VM. Okay, so you're trying to use the same target and build host, essentially? Yeah. Okay, cool. Hi, so I have a couple of questions. How do you bootstrap this? Like, where do you get make from? That's a good question; the bootstrap is actually quite interesting. It's based on the Linux From Scratch bootstrap. If you want to see the gory details, you can look in the definitions repository, which is on our Git server. The gist of it is that we start by building from tarballs. We have a bootstrap build mode which happens outside the chroot and uses the host tools. That builds, I think, a GCC and binutils, and then with that it builds a stage two, which is six components, I think: make, GCC, busybox and glibc. And then it builds everything again with those tools in a chroot. So we use basically clever ordering. The actual description of this is in the definitions, so it's kind of explained in comments, and you can see it starts with stage-one binutils, stage-one GCC, and then Linux API headers, glibc and so on. The bootstrap is quite good because it's really easy to cross-bootstrap to a new platform, so Baserock has been ported to a bunch of different architectures already, like ARM and MIPS, and we did an ARM big-endian port, which I think is one of the only OSes you can run on ARM big endian at the moment, because you only need to cross-build about six things and then the rest you can build natively. Okay. And a follow-up question: if there's a security vulnerability in, say, glibc or something low level, I assume the implication is you'd have to rebuild most of your image? There's a way you can cheat, by adding a new version of the component on top: if you wanted, you could add glibc again, overwrite the existing version and deploy that as an upgrade. But yeah, the design of it encourages rebuilding everything from source, which isn't ideal when doing a security update.
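The reason a low-level change cascades into a full rebuild is the way such tools identify artifacts: each component's cache key is derived from its own definition plus the keys of everything it builds on, so changing glibc changes every key above it. A toy sketch of that idea follows; it is my illustration, not YBD's actual algorithm or field names.

    import hashlib
    import json

    def cache_key(name, definitions, memo=None):
        """Hash a component's definition together with the cache keys of its
        build-dependencies, so any change low in the stack changes every key
        above it. `definitions` maps name -> {'ref': ..., 'build-depends': [...]}."""
        memo = {} if memo is None else memo
        if name not in memo:
            defn = definitions[name]
            dep_keys = [cache_key(dep, definitions, memo)
                        for dep in sorted(defn.get('build-depends', []))]
            blob = json.dumps({'name': name, 'ref': defn['ref'], 'deps': dep_keys},
                              sort_keys=True)
            memo[name] = hashlib.sha256(blob.encode('utf-8')).hexdigest()
        return memo[name]

    # Bumping glibc's ref changes the keys of glibc, gcc and python alike.
    defs = {
        'glibc':  {'ref': 'v2.21', 'build-depends': []},
        'gcc':    {'ref': 'v4.9',  'build-depends': ['glibc']},
        'python': {'ref': 'v3.4',  'build-depends': ['gcc', 'glibc']},
    }
    print(cache_key('python', defs))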
You need a lot of compile machinery. Yes; the more of us providing distributed build infrastructure, the better, and that's an area we're working on. So, we use Nix, and you are clearly trying to fix the same kind of problems that they are trying to fix, using similar components. Were you aware of Nix when you started your project, Baserock? I was, yeah. I wasn't actually one of the founders of Baserock; I got involved in it later on. I was aware of Nix, but I've never really used it much. I found it has quite an amount of complexity, with the build definitions being a sort of functional code rather than data. I think long term we definitely need to align the two projects. But you didn't use Nix because you were scared of it? In a way, yeah. I think the people who originally came up with Baserock didn't think of using Nix at all, so some of it has been developed in parallel. Okay, thank you. I should add that part of the original goal of Baserock is to reduce complexity. Oh yeah, let's see if lxml has done anything. There we are: it's generated a stratum which has lxml and Cython in it. I can show it to you in here. So that's quite a simple example in the end; it wasn't the most efficient solution, but it worked. It just saves you writing definitions by hand for things where metadata already exists. There are also importers for RubyGems, npm and something else. Now, is that the least useful bootable Linux distribution ever? I don't know; it depends how much you like using lxml from the console. Any final questions? Okay, great. Thank you very much, Sam. Great presentation.
Sam Thursfield - Introduction to Baserock The Baserock project is about creating system images from source code in a clean, reproducible way. All of the tooling is written in Python. In this talk I'll explain a bit about the core idea of Baserock: declarative system definitions (expressed in YAML) that can be built and deployed in various ways. Then I'll go into more detail about the tools available, and some of the cool things that they can do: distributed building, atomic system updates, creating custom container images, and more.
10.5446/20195 (DOI)
Thank you very much for coming to this EuroPython gathering in Bilbao; this talk is about being simple, and there is no need to be intelligent to be here. Thank you for coming to EuroPython Bilbao and to this talk on dumb development: no need to be smart developers here. Now, you may have worked out from the picture there what you've got to do with your piece of paper to start with: fold it into two down the middle and then into three sideways, so you end up with six roughly equal-sized squares, and then number them from one to six. Meanwhile, we'll look at the next slide. We've talked this week about good development principles, and DumbDev is about some good development principles. I'd like you to look at that list from 12factor.net; these are the features a good software-as-a-service application should have. Look at it for a minute and try to remember it, because I'm going to ask you questions on it. Imagine you're going to get five pounds for every one you remember. Okay, now I'm going to take it away and ask you to write down in square one as many of those as you can remember. So how are we doing? Three? Any advance on three? Four? Six? Pretty good. Have you finished writing that? Let's put it back there, so you can check which ones you got and which ones you missed. You can see it's quite a hard task like that, and the problem with us as programmers is that we don't do very well on memory for things. When you come across a code base that is the big-ball-of-mud anti-pattern, you often have to remember vast amounts about the code base that are not helpful or necessary, and we aren't good at doing that. It occurred to me that when I couldn't do the necessary memory tasks and understanding like that, it wasn't necessarily my problem; it was a problem with the way the code had been written. So let's think how we can simplify things. We've got a nice symbol that we're used to using in many different circumstances, the hash, which is also like a noughts and crosses table, tic-tac-toe. So I had the idea: let's see what we could do with just a noughts and crosses grid like that. In the middle there's the concept, and as a means of specifying how we can simplify things, we say you may not have more than eight concepts around it; it may not branch more than eight times. Possibly you might want to fill only the top three and the bottom three and have six branches, or fill the four corners and only have four branches. But the principle is that there has to be some limitation, so that we don't get to the point where we cannot understand and remember all the branching. So what I'd like you to do in square two of your piece of paper is to draw a noughts and crosses grid to fill that square, put the title in the middle, good things from the conference, and in the eight squares around it list eight things that you've really appreciated this week at EuroPython. You've got a minute to do that. Okay. And then in square three do the same thing, draw a diagram like that, because it's no good going to a conference and not changing what you are doing as a result of having been at the conference.
So, thinking about the good things you've learnt this week: what are the things, up to eight of them, that you are going to do differently when you get home? That's square three, in the same arrangement, and the title will be something like changes to make. What are the eight things you're going to change? Okay. Now the practical part: I'd like you to get together in a pair or a three, as it sorts out in the row, and just discuss those things. Challenge each other, introduce yourselves, find out each other's name and where you come from; and preferably, if you're sitting next to someone you know, go and find someone you don't know and ask them what they've enjoyed and what changes they're going to make. If you're on your own, go and find someone else who's on their own and make up a pair with them. You've got five minutes. Okay. The next idea develops from the hash sign, the noughts and crosses table. I was talking to Anders Hammarquist last night, and he had the idea of restricting the complexity of branching in two directions, because it's not just the number of things you've got to think about at one time at one level, but how deep you go. If you look at a cube like that, the central yellow square has eight things around it, maximum; you just can't fit more in. But it also makes sense in a lot of software development that you can't go more than two levels deep. So if I'm drawing a diagram and I've got to my maximum of eight, and a thing here is too complex, I branch it out into another eight; and in that eight something gets too complex, so I branch it out further. If you go more than two levels out, it's getting too complex to understand. If you fill up the squares and branch out two levels, you get over 500 concepts, and if you've got a programme with more than 500 concepts, it's too difficult for people to understand. Because what we want is this: we don't want people to have had to work on a project for a year in order to be useful. We want to be able to bring developers on board and say, okay, here's the code, here's some documentation; it's small enough and simple enough that you can get going quickly, without having to carry a huge amount of knowledge in your head.
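As a rough illustration of that rule, and not something from the talk, a tiny checker could walk an outline represented as nested Python dicts and report anything that breaks the two limits: no more than eight branches per concept, and no more than two levels below the centre.

    MAX_BRANCHES = 8
    MAX_DEPTH = 2

    def check(concept, tree, depth=0, problems=None):
        """Walk a mind map {'concept': {'sub-concept': {...}}} and collect violations."""
        problems = [] if problems is None else problems
        if depth > MAX_DEPTH:
            problems.append('%s: deeper than %d levels' % (concept, MAX_DEPTH))
        if len(tree) > MAX_BRANCHES:
            problems.append('%s: %d branches (max %d)' % (concept, len(tree), MAX_BRANCHES))
        for child, subtree in tree.items():
            check(child, subtree, depth + 1, problems)
        return problems

    outline = {'Decouple': {}, 'Midi services': {}, 'Borrow': {}, 'Eight': {},
               'Uber': {}, 'Visualise': {}, 'Tests': {}, 'Errors': {}}
    print(check('DumbDev', outline) or 'fits on a noughts and crosses board')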
And then, if you take that too far and split everything up into tiny pieces, you have just moved the complexity out into the interactions between them. So instead of micro services you could have mini services, or midi services, or maxi services; midi has the right sort of feel for the level in between, for getting it right. Taking this idea, what I've done is to make a slide as a mnemonic for these concepts: DumbDev, with the letters of the title in the middle. First of all we want to decouple things, which is what we're talking about when we separate out services and put things into nice containers; and the M there is for midi services. I'll come back and explain a few more of those, but let's go back here first. Cisco has a problem. It's their 30-year anniversary, and the company does lots of good networking stuff, but they're known for acquisitions: they're one of the highest-acquiring companies, having acquired more companies than Microsoft or IBM, and they have a reputation for development by acquisition rather than by innovating. So their current campaign, quite sensibly, is to connect everything, to innovate everywhere and to benefit everyone, which you might think would be quite a sensible thing for a networking company to do. But if you think about the big ball of mud, that isn't how we want to develop software; we would rather be saying decouple everything. You've probably met developers who develop as if they were connecting everything, so you can't test things separately and you can't easily work. Decoupling is a good principle. Cisco also wants to innovate everywhere. Well, innovation, that's good, isn't it? Well, is it? Because innovation is hard work and requires lots of brain power, and we're dumb developers. Years ago I would think about what I had to write today, and I'd be able to tell clients how many lines of code I wrote, and the more I wrote the better. Nowadays you think: I don't want to write that code; it's probably been done before, far better, and it's available free of charge, so I go to the relevant search engines and look for what someone else has done. Do I really want to innovate? There are not many areas in software development where we're actually innovating; there are some, but the run-of-the-mill day-to-day programmer shouldn't be innovating. But they also shouldn't, I would argue, be acquiring, as in: I'll have that code and I'll use it, thanks very much, it's free. Rather, what we want to encourage as one of the principles of dumb development is to borrow everywhere. Borrowing has a nice feel: you're acquiring it, but not permanently, and borrowing means you're going to give it back. We want to be able to use open source software, develop it, improve it and give it back to the community. So if we could say decouple everything, borrow everywhere and benefit everyone, that would be a really good idea. Going back to the diagram: we've got D for decouple, M for midi services, B for borrowing, and E for eight. I've got eight sections around the middle, but it could also be six or four: you could do two rows of three, you could have two sets of four, or you could have the four corners and then the middles of the sides.
Or you might use just two as the layers: not more than two layers down, which gives you around 500 options, and that should be enough for anything. Now, the U: I would like to explain the U with a story. There is the picture of South East Asia, and in yellow in the middle you can see Vietnam, with China and Myanmar to the left. In the 1970s it was a pretty unpleasant place to be, and there is the map as it was in 1967, with South Vietnam and North Vietnam and a demilitarised zone between them. Without going into the politics of it, you obviously know that the South Vietnamese, with American help, didn't win the war, and there were lots of boat people, as they were called in the 1970s, who came to this country. I was working for Save the Children with some Vietnamese families, and I met one Vietnamese lady who had a cleaning job. I got talking to her, and she was saying how she used to be a lawyer in South Vietnam, not just a lawyer but at the barrister level, and here she was, a cleaner. This was surprising until I thought about it and realised the problem: the knowledge she had of the South Vietnamese legal system had just disappeared overnight. The country didn't exist any more, and what she knew how to do wasn't there; the knowledge had gone. Knowledge is a fragile sort of thing. Another example you will have heard in the news: the London black-cab taxi drivers have to learn the Knowledge to get a licence. That takes them at least four years, and it involves driving around the streets of London so that they know every turn and can pass tests on how to get from here to there. They have to know London exhaustively, so they spend four years on a moped going around all the possible routes learning this, and it means they're very good at what they do. However, there is a problem: modern technology comes along and says, oh, I've got a sat nav, and I can now be a taxi driver. Obviously the black-cab drivers get rather upset that they've spent four or five years learning this knowledge and it's all gone. So what can they do instead? Well, not a lot. But think of us as software developers: I want to be a good taxi driver, I know how to turn right, I want to be able to drive safely and not run pedestrians over and cause accidents, and I'm doing that in London; but next week I'm doing it in Paris, and the week after I'm in Bangkok being a taxi driver, and all I need is a sat nav and the ability to drive on the right side of the road. That's more how software development needs to work, without this vast requirement for memory. So that's the U, as in Uber. I'd like now to apply this to the thing we looked at at the start. In your pairs, for just two or three minutes, have a look at that twelve-factor list and say: it breaks the rules of DumbDev, so how can we break it down? You might look at development and deployment, or any other way you choose to break it down. In your little discussions, see how you would put that into number four on your sheet: draw another diagram, put the title there, and see if you can put those factors into groupings, so you can put more than one in a space and that becomes another level. So just do that: discuss how you would break it down.
You have visualising: writing things in a way that stops you going 15 levels deep in the mind map, or having 29 branches. Then running tests in a way that forces the tests to be constructed simply. The last one that I've got down there is errors. I like to see what the errors are. You've been in the situation where you get a 500 error and you spend the next two hours trying to debug what's happening, because absolutely nothing tells you what's gone wrong. Is there some instrumentation that we could build on code that made it possible to measure how well you had defined your errors? And the D: I could think of a number of Ds, but that's an exercise for you, to think what D you would put in that corner. So to finish off, with number six: could you make a copy of that in your number six square? Just write it down so you can go away and remember it. And the number five exercise, for you to take home, is to think of one of your projects and put it in that format: put a title in the middle and say, I've got a project here, what are the up-to-eight-but-no-more main concepts, and then divide those concepts out into a maximum of two levels further. Okay, well, that's my lot, so time for questions. APPLAUSE Is there any question? I'm sure you've thought about a name for the missing one, the bottom-right D. What thoughts have you had on that? Well, the best one, I think, was don't, because then I'm branching it out into another grid with eight things that I would say I shouldn't do; but then that would just be my personal opinion on things. I'm trying to work out which are our own opinions about how we do development, and which are good rules where we can actually say: here is a process that we can use to help people write good code. I found it pretty obvious to put DumbDev there, so you're then recursive. Wasn't it part of your approach to be recursive? Yes; every self-respecting acronym is recursive, and you have lost two opportunities. Right, good point. Let's put in that correction: imagine that there's a DumbDev in there for a recursive definition. Thank you. We have time for one last question. Have you managed to apply it to something you've done? Sorry, I don't know where the screen's gone, but just one last thing to say before I answer that: dumbdev.io, if you go there and put your email in, just to keep in touch if you're interested in working on this any more. But, well, I've applied it to itself recursively, in that this talk is written around it. It's the sort of thing where I'm thinking: what else do I do with it?
I don't know if it's worth taking further, but I know there are some projects I would like to apply it to. And the problem is how do you apply it retrospectively when you're working on something that doesn't follow these sort of principles? Thank you very much.
Rob Collins - DumbDev -- eight rules for dumb development So often, we've been encouraged to be smart in our development. "Work smarter not harder!" say the encouraging posters. But the desire to be smart, and be seen to be smart, is hurting. The design suffers, the code suffers, and it's hard to recruit developers smart enough to cope with the problems caused. In this talk, I'm proposing an alternative to being smart: **_DumbDev_**. Let's use our brains for enjoyable, interesting things, rather than wrestling with code written for smart developers. **So what do I mean by _dumb_?** Well, I don't mean 'ignorant'. A clever person can be ignorant of some important information, and learn it. With ignorance, there is hope. I'm also not talking about its opposite, 'stupidity'. This occurs when someone is given the information or advice, and chooses to ignore it. Dumb isn't stupid. Nor is it silent, as in someone who can't speak. Instead, the picture I have is of one of the early computers: very small RAM, disk space measured in KB, and a woefully inadequate CPU. In other words, slow, with very little working memory and limited persistent storage. Hey, this describes my brain -- and I realise that's an asset! I will write better software if I take this into account. So, I'm a **_DumbDev_**, which means I can't hold in my mind the infamous [Plone Site class hierarchy] (see [Michele Simionato's article]). Rather than beat myself up about this, I can say, "Hold on, maybe deep inheritance is a bad idea..." There is some debate about the number of things we can think about simultaneously: it may be 7, 9, 5, 4 or even only 3. We can learn some tricks to appear to cope with more, but most of us can't easily do 38. Here's the first **_DumbDev_** rule, putting a sensible limit on complexity: 1. All diagrams must fit on a Noughts and Crosses (Tic-tac-toe) board**. There are seven further rules for me to explain in this talk. I will demonstrate the benefits of the **_DumbDev_** approach, with good and bad examples. At the end of the presentation, I hope you will realise that you're a better developer than you thought at the start. The next time it takes you two hours to debug a simple exception, you'll know that it's not you. It's because the system wasn't written using **_DumbDev_** rules.
10.5446/20190 (DOI)
So good morning to the people who are here with a hangover; I'm sorry, because it's early in the morning. I think this is going to be more of a friendly talk, because I nearly know most of the faces, and that's really great. What I'm going to talk about is Plone 5. Who does not know what Plone is? One person? That's great, there is one person; otherwise I could just finish the talk and go for some beers. So let's try to do that. Who am I? I'm Ramon Navarro Bosch. I'm a Plone Foundation member, and I'm on the framework team in Plone; that's the core Plone team that has its meeting every week, and it's really boring. And I'm the CTO at a company in Catalonia. Yeah, I'm Catalan, as you can see. So that's who I am. How am I going to organise this talk? First I'm going to talk about Plone 5, because we are at a good moment in Plone: we are about to release Plone 5. I'm going to try to explain what is going to change there and how you are going to program with it. And then I'm going to talk about machine learning, because otherwise you cannot get your talk included in EuroPython. So, let's talk about Plone 5. From the user point of view, from the developer point of view, from the sysadmin point of view, from the business point of view, it's mostly about quality, user interface and testing. It's really old software; it has been there nearly 12 years. We grew really fast, and we have a lot of sites around the world, some for the Brazilian government, and it's being used a lot by big companies, ABB and some other big companies. And why are they using it? Because it has good quality and it's really stable. You can grow from a really small site that's only for your personal use, for your shop under your flat, or you can use the same software for building a really big site, a big intranet, or a big content management system. So, everybody knows Plone, and this part is going to be boring because I'm going to talk about what Plone is from the user point of view. Plone, from the user point of view, is content. Content is the king; that's the word king, in Japanese, on the slide. What does that mean? That everything is content oriented. You have pages, you have documents, you have folders, you have events, you have whatever you want. For example, there is a hospital in my city that wants to store its patient sheets: OK, he's ill, he has these problems, we solved that. This is content. Everything that is a piece of a document, that we could have in Word or Excel or as a piece of text, is a document. So content is the king; content is the centre of Plone. And we have full multilingual support, in Catalan also, so that's great; it's translated into more than 50 languages. I added this because most of the CMSs in other languages, like Java, don't have full multilingual support. That means that all the content can be translated into all the languages, and you have connections between one translation and the other, and so on. OK, one of the most important things about Plone is security. Why is it being used by the FBI? Why is it being used by the Brazilian government? Why is it being used by ABB? Because they know that it's really secure. All the content that's there, all the fields, everything that's there, is not going to be seen by anybody you don't want to see it. And why do we have this security granularity? It's because we have Zope and the ZODB.
I'm going to talk a bit about that later. And we have workflows, so all the content goes through different steps, and each step has different permissions for different users, groups, et cetera. Theming: if you were using Plone before Plone 5, in 4.x, you surely would have quit using Plone, because it was really difficult to use; the learning curve. I've been asking some of the EuroPython people, have you used Plone in the past? Yeah, yeah, but I quit because it was so difficult to understand. So we solved that; we solved most of the problems we had with theming. For example, now we are using a kind of rules engine called Diazo. The rules are designed so you can theme your site without knowing anything, not even Python, if you want; it uses rules to move things from one place to the other. I'm going to show it later. And yeah, we integrated RequireJS and LESS, so if you want to build a nice front end with Angular or whatever framework you like, Plone directly provides you the tools you need to do that. We have inline editing, through-the-web editing, so you can create content and move content around. This is the Mosaic project; it's a spin-off of Plone that's still a work in progress. It's really cool, because you can just create content, upload an image through the web with drag and drop, and push it wherever you want in the page. OK, if you want, we can try it. Let's see. So this is the new Plone 5 front end. It's really cool because it's using a theme called Barceloneta; Barceloneta, from Barcelona. Catalonia again. Thank you. You can see, I think it's in English. There are some cool things: you can browse all your content here; this resolution is not really good for doing these kinds of things, but you can go to the folders, you can go to the content and see whatever you have here. You can create a new page wherever you want and just write down whatever you want; you have this TinyMCE 4 editor that's really cool, so you can push in images, files, whatever, and save it. Then you can go to the contents view. There are a lot of good features; if I showed them all, we would still be here tomorrow. So, just sharing: you can decide who can edit and who can view; the history button, where you can see who edited what and what the difference is, et cetera. OK. So now, as we are at a technical conference, I'm going to talk a bit more about how Plone works technically, because that was like the commercial screencast. So let's go. Hierarchical: Plone is based on a hierarchical database, because all users are used to having their desktops with folders and content inside them. We are using a specific database called the ZODB. I think right now the ZODB is more than 10, maybe 12 years old. At that time nobody was talking about NoSQL, and the ZODB is a NoSQL database in which you store objects inside objects; it stores the pickles directly, in a really great structure. If you are not going to use Plone, but you need a database to store a tree of objects that have some kind of relation between them, the ZODB is a really great database outside of Plone as well. This concept of storing everything hierarchically: there is really no other CMS that does that. Everybody is storing things in SQL, but content management is not relational data. If you go to the university and you say, oh, I'm going to store an object and I'm going to put all the HTML in a field of my table: sorry, you need to structure that. You need semantics; you need the option to have children of your content. You have a folder, then you have content inside that, and images inside that. So that's one of the main points, technically speaking.
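As a rough, standalone illustration of that idea, and not Plone code, storing a small tree of objects directly in the ZODB might look like this; the ZODB, persistent, BTrees and transaction packages are on PyPI.

    import persistent
    import transaction
    from BTrees.OOBTree import OOBTree
    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    class Document(persistent.Persistent):
        def __init__(self, title, body=''):
            self.title = title
            self.body = body

    # Open (or create) a file storage and get the root object.
    storage = FileStorage('site.fs')
    db = DB(storage)
    connection = db.open()
    root = connection.root()

    # A folder is just a mapping of names to objects; objects nest naturally.
    if 'site' not in root:
        root['site'] = OOBTree()
    root['site']['about'] = Document('About us', '<p>Hello EuroPython</p>')
    transaction.commit()

    print(root['site']['about'].title)
    db.close()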
But the content is the king; we have this folder structure. So then, what are we using to define the content? We are using a project called Dexterity in Plone 5. We removed Archetypes; if you never used Archetypes, I'm really sorry for you. Dexterity is a really cool piece of software, because it allows you to define content with a simple interface. You just decide: OK, I want to create a content type called sponsor. Then you have a choice field with a little vocabulary that is going to be shown in the UI; you have a rich text field, which means TinyMCE is going to be rendered so you can write; you can define a URI, and different kinds of fields, files, images. You can also define permissions for specific fields: I want Timo not to see that field, so I can write that, and Timo will not see that field. That's really cool. With this simple file there is none of the strange boilerplate that we used with Archetypes; it defines only the interface, really the fields that you want in your content, the semantic meaning of your content. You can define your own content type, and this content type is going to be mapped to a URL. One of the good things we have in Zope and Plone is traversal. That means that all these folders, as I explained, a folder and then a content item and then an image, for example, can be accessed through the URL: you write the URL of your site, the folder, the content, and so on, and then the view, a specific browser view, the function that renders it and does the templating or whatever you want to use to see it. Here we have an example of a view. One of the cool things we added is the JavaScript and CSS management system: you register CSS and JavaScript, and then it automatically shows up on your page when it's needed. You can register, for example, a bundle for jQuery DataTables, and say that this view, which is going to render some kind of page, needs this JavaScript included and the CSS needed for this resource to be loaded. Then you just call add resource on request with the name of the resource you want to add, and it's automatically going to be deployed on your view when you are rendering it. There are a lot of technical details; I don't want to go too deep, maybe. Here is how we register a view. I've been asking a lot of people here at EuroPython, what's the worst thing about Plone? And they say: this, because it's XML. XML is not bad, sorry. It's OK; it could be better. But it's really great, because it allows us to have a really good list of the views, the content, all the definitions that we have. So I really think it's great to have this way of defining all the views and all the content.
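A minimal sketch of what such a Dexterity schema might look like: the sponsor example and field kinds come from the talk, but the exact imports here follow the usual plone.supermodel and zope.schema style and may differ from the speaker's slide.

    from plone.supermodel import model
    from zope import schema

    class ISponsor(model.Schema):
        """Schema for a 'sponsor' content type; fields become edit-form widgets."""

        title = schema.TextLine(title=u'Name')

        level = schema.Choice(
            title=u'Sponsoring level',
            values=[u'gold', u'silver', u'bronze'],  # small vocabulary shown in the UI
        )

        url = schema.URI(title=u'Website', required=False)

        # A rich text field would normally come from plone.app.textfield
        # and render TinyMCE; plain Text is used here to keep the sketch small.
        description = schema.Text(title=u'Description', required=False)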
And we have the templating, because you can create a class to expose your content, your content type, but you need some way to render it on a view page, some kind of templating. One of the cool things that we added at the end of Plone 4 is that we can use Chameleon. So now, no more TAL, no more defining variables with a specific crappy language that's only used by Zope; we can define variables and use them with this dollar syntax. That's really interesting. So, Python. I think most of you know that Zope is the most Pythonic framework for working on content management. You can access the objects of your site, and everything is mapped to the database: you can just go to the attribute of your site that's called folder, then go to the attribute of the folder that's called document, and then access an attribute of the document. So you really have that granularity and a Python way of doing content management. I also wanted to talk about the Zope Component Architecture, because maybe there haven't been many talks about it; the last one I saw published on the internet was from 2005. But it's a really interesting piece of infrastructure, and I think it's underused. We have zope.interface and zope.component, which are two really good packages for managing three good programming patterns. Normally, when you write some Python code, you write a small program; but if your program is going to grow, you will need some patterns for designing that software, and sometimes there are really good libraries for those patterns, for developing big software with them. Zope's is a really good one; I really love it. So where are we storing our components, our adapters, our patterns? In the local registry and the global registry. We have two in Zope: one is global for all the sites of your environment, of your process, of your thread, and then there is one specific to each site, which is the local registry. First I wanted to explain what an adapter is. The adapter is one of the patterns that is really good to use in software, because in the past we were subclassing classes, so we ended up with Archetypes content where, if you looked at how many classes something subclassed, it was maybe 30 different classes, and if you went up to see the parents of the class you would never know what was going to be there. It was a mess. So the Zope community decided to create this adapter pattern. You define an interface, in this case a person: a normal interface, like in Java or whatever, where you define the methods and the attributes of that interface. And then you have implementations of that interface. For example, we have an interface called IPerson where you define a function for which t-shirt he is wearing. Then you have an adapter called Catalan Guy, and it returns an estelada shirt; and if there is another implementation, called Basque Guy, it returns its own t-shirt. So you can use any of these adapters to adapt an object and get the methods that you need. The subscriber is another good pattern, the observer pattern of software engineering. Here we have the kind of object that we want to watch: I want to check if you modify this kind of object, an IPerson object. So you subscribe for IPerson and for the modified event, and you decide which function is going to be executed when it's modified. And the utility: that's when you just want to store some kind of list of, say, good alcohol. You define an interface, good alcohol, and then you have an object that will return the whole list of good alcohol, because it's a utility.
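A compact sketch of two of those patterns, adapter and utility, with zope.interface and zope.component; the IPerson and t-shirt names follow the talk's example, and registration is shown against the global registry.

    from zope.interface import Interface, implementer
    from zope.component import adapter, getGlobalSiteManager, getUtility, queryAdapter

    class IPerson(Interface):
        """Marker interface for person-like objects."""

    class ITShirtWearer(Interface):
        def which_tshirt():
            """Return the t-shirt this person is wearing."""

    @implementer(IPerson)
    class Person(object):
        pass

    @implementer(ITShirtWearer)
    @adapter(IPerson)
    class CatalanGuy(object):
        def __init__(self, context):
            self.context = context
        def which_tshirt(self):
            return 'estelada'

    class IGoodAlcohol(Interface):
        """Utility that knows the list of good drinks."""

    @implementer(IGoodAlcohol)
    class GoodAlcohol(object):
        drinks = ('txakoli', 'rioja')

    gsm = getGlobalSiteManager()
    gsm.registerAdapter(CatalanGuy)
    gsm.registerUtility(GoodAlcohol(), IGoodAlcohol)

    print(queryAdapter(Person(), ITShirtWearer).which_tshirt())
    print(getUtility(IGoodAlcohol).drinks)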
That's really useful. And then, at the end of this more technical Plone part, I'm going to talk about the JavaScript and CSS integration we did. We created our own kit, our own JavaScript framework. That's the worst mistake we made, I think, because it's going to be hard to maintain; but at the moment we did it, there were none of the solutions that the JavaScript community has right now. It is really great, though. It uses Patternslib as a way of defining: OK, you have these elements, and then with data attributes on the HTML elements you can configure them, and it automatically runs JavaScript on top of that and renders the correct widget for you. For example, this is the date picker widget that we normally have in Plone, and here you can see the configuration: you define the format, the date, and so on. And everything is integrated with RequireJS and LESS. That means that, if you want, you can start your Plone in debug mode, and then you are going to see everything compiled on the fly in the browser, so you will see the source code of Select2 there and you can go and debug whatever you need. Because Plone, I think, has nearly one million lines of JavaScript with all the libraries it includes, and TinyMCE is so big, so if you want to debug that you need the option to have it all uncompressed, without having to take care of loading everything yourself. We created a Python module that takes care of that, creating the JavaScript configuration for RequireJS and compiling the LESS, so you get the source code right there in your browser. Diazo, I'm not going to dwell on, is just a way of defining themes. You go to your designer and say, OK, I need this web page, just design it. The designer draws it, you send it to the people who do the HTML, and they do the HTML. OK, the HTML is done. Then you give it to the Plone guys: OK, this is the HTML of my site. And with these Diazo rules you just say: I want this div to be that div of Plone, this piece here to be that piece there. And you can create your theme without doing such hard integration as before. That's also a really cool thing. OK, new things that we are working on, that we are going to do in the next months, I hope: the Plone REST API. It's really clear that all the web frameworks and web applications are going to move to JavaScript; all the UI needs to be done in JavaScript. So what's really needed is a really cool REST API with which we can interact with Plone. So we created plone.rest. I think it's not released yet, but it's going to be released really soon; you can try it on GitHub. It's really great. We defined a way to declare HTTP verbs like PUT, GET and DELETE on a specific content type: OK, I have IPerson and I want to delete the person, so I can define a specific HTTP verb for DELETE on this object and say which function is going to be executed. Here you see, for example, the implementation of the PUT.
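As a hedged sketch of what such a verb-specific service can look like, loosely in the style that plone.rest and plone.restapi later settled on: the base class, the ZCML registration and the exact request and response attribute names here are assumptions on my part, not the talk's code.

    import json

    class PersonPut(object):
        """Handle PUT on an IPerson object: update a couple of fields from the
        JSON request body and return the new state. Registering this class for
        the PUT verb and the IPerson interface would happen in ZCML."""

        def __init__(self, context, request):
            self.context = context
            self.request = request

        def render(self):
            data = json.loads(self.request.get('BODY') or '{}')
            for name in ('title', 'tshirt'):
                if name in data:
                    setattr(self.context, name, data[name])
            self.request.response.setHeader('Content-Type', 'application/json')
            return json.dumps({'id': self.context.getId(), 'updated': sorted(data)})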
OK, testing. We improved a lot on testing; we really did a lot of work there. On jenkins.plone.org the Plone 5 job has 101 acceptance tests, and I think there are more than 5,000, maybe 9,000, integration and unit tests. We really test everything. If you commit something and you break the build, then this guy comes to you and starts to yell at you, and you never go to sleep before checking the Jenkins screen again. So we have really good testing. I really want to thank Timo and the testing team, because they did a great job on that; we have a really, really good testing environment. That also makes companies rely on us, because, OK, we create a new release of Plone and you know it's going to work, because we have really tested everything: I changed the name of a button in a control panel and ten tests failed. That's really great for companies that rely on that, that want to spend their money getting their sites done and still have them working in five years. That's really important. So, OK, we have the Plone 5 beta 3 release, and we created a specific repository, so if anybody wants to try it and find bugs: OK, it can be difficult to try Plone because it's complex software, but it's four lines in the console to try it. You just clone the repository from github.com/plone, you run it with Python 2.7 (the ZODB already runs on Python 3, so we think that someday we will have the option of Python 3 for the whole stack; it's really big, but we will try), then you run buildout, and then you run instance fg, and you have your Plone site running. It's only four lines of console. You need to go and take a coffee between the buildout and the instance, because it might take 10 or 15 minutes, depending on the machine and the network. We have really good documentation: there is training.plone.org and there is docs.plone.org, hundreds of pages of documentation written for developers and for users, and lots of training material that helps you understand it. OK, the future. We are going to move, maybe, towards this REST API, and we need a JSON front end; we are going to have some kind of asyncio back end, maybe with SubstanceD, maybe with Pyramid, who knows. We will see; we are going to talk about that. This talk also has machine learning in the title, and they left me 10 minutes, so I'm going to talk about machine learning as well.
So I started by trying to understand what machine learning is. It's a really big subject. I found one really good picture that I love and wanted to show you: the scikit-learn algorithm map. You start from the start node and say, OK, what do I need to do? Do I have more than 50 samples? Yes; in Plone you normally have more than 50 samples. Do I want to predict a category? Yes, I want to predict what the content is talking about. Do I have labelled data? No, because people don't have labelled data; so where we would otherwise follow the yes branch, we go to no. Do we know how many categories we have? We don't know how many groups of content there are going to be on the Plone site, but that can be defined by the administrator, because we are not going to try to predict it. So then we are going to use k-means, and k-means is one of the algorithms that we have implemented, which I'm going to show you. Then, in case we do have labelled information, there is another branch. This scikit-learn cheat sheet really helps you a lot in deciding which algorithms you need to use for different kinds of usage. scikit-learn is maybe not the best option, but we wanted something that is integrated in Plone, where you don't need to run anything external. There is also an implementation with Gensim; that works fine, but you need an external REST API and you need to export the content, and our clients are really particular about security, so they don't want us to export the content to any other external application. We need to have everything embedded in the security of Zope. So we implemented this: collective.machinelearning, on our GitHub. Clustering is the initial piece. What we created is an adapter called ILearningString: you get that adapter for a content type, say IPerson, and in the adapter you decide which text you want to use for the machine learning; maybe you want to join the name, the first name and the birthday, or whatever, and you produce a piece of text that is going to be used for the analysis. Then we normalise it, we vectorise it with NLTK and a feature hasher, and we get the corpus of each document. We store that corpus in a pickle in the database, so we can reuse it later and it's not expensive to use again. Then we clusterise: with all the corpus that we have, we create the big matrix of everything that is on the site. We tried it with more than 150,000 documents; you need 64 or 32 gigabytes of RAM to run that in memory. If you want, you can use the batch variant of k-means, but this was running in a single process and we tried to use just memory; there is an algorithm, mini-batch k-means, that lets you work in batches. So we clusterise, we define the number of clusters that we want in the front end, and then we just use the k-means algorithm to decide the groups of content. We store the model that we get in a pickle in the database, and then we use that model to predict in which cluster the next content you create is going to be: you create a new content item, and automatically we see, OK, this content belongs to this cluster. The good thing is that all this implementation has security implicitly, because the ZODB has its objects and the catalog is secure, so nobody is going to get any information that they are not supposed to see.
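The pipeline described here, hash the text, cluster with k-means, keep the model around to classify new content, might look roughly like this with scikit-learn; the feature sizes, parameters and example documents are illustrative, not the ones used in collective.machinelearning.

    import pickle
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.cluster import MiniBatchKMeans

    documents = [
        'hospital patient sheet about an illness',
        'sponsor page for the conference in Bilbao',
        'another medical record with a diagnosis',
    ]

    # Hashing keeps memory bounded and needs no fitted vocabulary.
    vectorizer = HashingVectorizer(n_features=2 ** 16, stop_words='english')
    corpus = vectorizer.transform(documents)

    # The administrator chooses the number of clusters up front.
    model = MiniBatchKMeans(n_clusters=2, random_state=0)
    labels = model.fit_predict(corpus)
    print(dict(zip(documents, labels)))

    # Persist the model (the talk keeps it as a pickle in the ZODB) and later
    # predict the cluster of a newly created document.
    restored = pickle.loads(pickle.dumps(model))
    new_doc = vectorizer.transform(['a new patient sheet'])
    print(restored.predict(new_doc)[0])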
And for the future, we are now working on Naive Bayes for classification, on recommendation, on semantic search, and on making it external: we want to be able to push the computation outside of Plone, but that's a bit difficult because of the security issue. And I wanted to show you this, because otherwise you might say, oh, maybe I'm not going to believe what you're saying. OK, so this site has its content in Catalan, because I was doing it this morning and was a bit sleepy. Here I created, for example, some content, a lot of copy-pasted content, and if I go to any content item here, I will see that it belongs to cluster one. And there is an automatic site map that takes all my content and classifies it into different clusters: I'm creating content and it automatically works out the groups of content that are semantically close to it. The way it's configured is really nice, this new control panel; Victor did that. Let's see: machine learning settings. Here you define where on the file system you are going to store the pickles, whether you want to use an NLTK stemmer, which stop words you want to use, the different n-grams that we have, and whether you want to remember the hashing of the strings so they can be reused. And then we define how many clusters we want, the maximum number of clusters, and the name of the pickle that we're going to use to store the clustering. Then you can press this compute button, and the compute button gets all the content, processes it, stems it, and creates in the catalog the specific indexes that allow us, in the browser, to see which object belongs to which cluster. And we created this nice view called the auto sitemap, which automatically shows you the content grouped by cluster. Then it's a matter of defining, for each cluster, what it's about; if you have real content it's really easy to see, because you see the titles and say, OK, this cluster is talking about this kind of stuff. So you can name that cluster, and people will see all the content automatically grouped there. So, going back to the talk: this is a proof of concept that we are using in production on some sites, because it was needed, but we are working on it a lot and we sprint on it regularly. So if there are any data science people who want to help us understand what's going on behind all this stuff, we would be really happy for them to help us. OK, the Plone community: it's a cool community, and if anybody wants to join the Plone community, you should try now, it's much easier than in the past. You're welcome; we are really cool. And if there are any questions, we're done. Thank you. Thank you for the presentation, just one question: how does this handle content in different languages? Yeah, right now the stemming that we have uses English, and we need to add a drop-down in the control panel that allows you to decide which language you want to use. The stemming is implemented for most languages, so it's just a matter of adding this option in the control panel.
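Per-language stemming and stop words are straightforward with NLTK's Snowball stemmer, which is presumably what such a language drop-down would switch. A small sketch of my own, which assumes the NLTK stopwords corpus has been downloaded first with nltk.download('stopwords'):

    from nltk.corpus import stopwords
    from nltk.stem.snowball import SnowballStemmer

    def normalise(text, language='english'):
        """Lowercase, drop stop words and stem, for any language Snowball supports."""
        stemmer = SnowballStemmer(language)
        stops = set(stopwords.words(language))
        return [stemmer.stem(word) for word in text.lower().split() if word not in stops]

    print(normalise('The patients were treated in the hospital'))
    print(normalise('Los pacientes fueron tratados en el hospital', language='spanish'))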
Could you give us a few numbers? How long does the computation take if you have, say, 100,000 or a million objects in the ZODB? We do that at night; it starts, I think, at 1 and finishes at 4, so three hours for 150,000. OK, so you most likely have a specific instance for that, so it does not block the other ones? We have a specific one. The only problem is that when it's finished, the computation needs to write the cluster onto all the objects, and that's what takes longest: writing down on all the objects which cluster they belong to. What's the size of the pickle once the computation is done on 150,000 documents? And is that read into memory by the sitemap generator on every page request, or do you cache it somehow? Sorry, I didn't understand. So, what is the size of the pickle that's generated at the end of the computation, and is that read into memory by the auto sitemap on every page request? No, no. We are storing the model, and we are storing the vectors, the matrix, the corpus. We store the corpus and we store the model, because when you create new content you want to ask the model which cluster it belongs to, to store that on the attribute. And we store the matrix so that if you want to compute some more times, you don't need to recalculate everything. So everything can be cached. When you are using the site, when you are doing any request, nothing is computed, because everything is stored in the catalog and in attributes on the objects. So it's not real-time machine learning; it's more like we run the algorithms and integrate the results with Plone. Do I have more questions? Is there something that can do real-time machine learning? You should talk with some data scientists. OK, because, I mean, I know Solr or Elasticsearch and things like that, and I don't think this is possible there, because it takes some time to build all the indexes, and I guess for machine learning the same is true; you would need incremental indexes and algorithms, and that might be hard to do. I just discovered, talking with some data science people here, that in Python there is a line of investigation called online learning; those are the kinds of algorithms that grow incrementally, designed more for online applications, so we need to investigate that line of work. Some more questions? If not, it's lunch time. Let's thank the speaker again. Thank you. Thank you, and enjoy lunch.
Ramon Navarro Bosch - Plone 5 and how to use machine learning with it. Plone is a Document Management System and Content Management System that has been in the Python scope for more than 10 years. Plone 5’s features allow us to manage content, define various kinds of content and provide a nice, useful UI to work on semantic web technologies. In this talk we are going to explain our approach for using Plone with the Python machine learning toolkit sklearn to enable clusterization and classification of content using a scalable content management system. We will also add some sophisticated front-end gloss using the new functionalities for frontend development added in Plone 5, and some real use cases of CMS/DMS with machine learning using sklearn and Solr.
10.5446/20188 (DOI)
Hello. First, I want to talk about whether you can automatically check if code is good, and what we actually can do. This talk is called Code Quality in Python. My name is Radosław Ganczarek. Let me tell you a bit about math first. Are there any mathematicians here? Okay, no. So we will go fast on that. There is a theorem in math called Rice's theorem that has a very obscure definition, but in short, in words a normal human can actually comprehend, it means that we can't automatically verify whether a program has any non-trivial property. So, for example, we can't write a dictionary where you look up your program and it tells you if it is correct: "My web application is right here. It's correct. All good." You can't write such a dictionary. You also can't automatically verify that a program never fails, that it returns some data, that it is well written (whatever that means), or its WTF factor. You just can't do this. But you can try. In fact, all we do with our automated tools is try: we simplify the task by adding some boundaries and limits so that it becomes tractable. So don't worry, that might have sounded harsh, but it's going to be okay. We can do some things to check if a program is good. Okay. So I will introduce you to many tools that I use, or at least tried: tools for checking Python programs or making them look more beautiful, like formatters and import sorters, and also a bit about code coverage. That's a small summary of what this talk is going to be about. Are there any managers here? No mathematicians, no managers, I don't get you. Okay, that's fine. So I will be talking from a developer's perspective: how we can use these tools, how we can improve our code with them. Of course, managers like charts, like percentages, and like to see them grow or drop in some cases. But we developers are more practical; we want to see how it changes our code. Okay, so let's begin. Here is a graph of various checkers. You probably know all of them, or nearly all of them; there was a talk about Radon yesterday. This graph presents how these checkers are contained in each other, meaning that a checker that contains another one checks everything the smaller one checks. So we see that we have one big checker, PyLama, and smaller ones. I will talk about all of them. Okay, so let's start with pep8. Who doesn't know PEP 8? Raise your hand. Oh, that's bad. But probably you are not programmers. PEP 8 is the Python standard for how code should look. It's just about how the code looks and whether it uses the proper idioms, not about the semantics. And pep8 is a checker for it; in most cases you can afford to conform to all of its rules, there are no rules you will find difficult to follow. So it's a style checker that's a must-have in most cases. The next one is Pyflakes. Pyflakes is a smaller checker, even smaller than pep8, but it checks many nice things that you would usually use PyLint for, and Pyflakes is faster. It checks for unused variables and unused imports. So if you have a guy who, when he creates a new module, copies the whole wall of imports from another file, you can catch it with Pyflakes and actually show him the Pyflakes output and say: hey, remove these imports. Okay, next is McCabe. This is a very, very small tool. It just checks the complexity of functions.
And what you see here on the screenshot are, in fact, the complexity scores for functions. You just decide which level of complexity is okay for you. For example, here we have 18 levels of complexity, so that's very, very bad. Okay, and Radon. Radon sits a bit off to the side of the whole ecosystem in this graph, because Radon is more of a metrics tool: a tool for managers, a tool for seeing the big picture of the project. So if we want to see whether the overall project is doing well or badly, we use Radon, but I think there's no point in using it every day. It also computes the McCabe score and other metrics. Okay, I talked about pep8 and Pyflakes, and here is Flake8, which incorporates Pyflakes and McCabe. This is a tool that in most cases is enough if you don't have too many dirty things in your code base; I will talk about that later. Flake8, like pep8, is a tool whose rules you can in most cases follow completely. And here comes PyLint. Who doesn't know PyLint? Okay. PyLint is a tool for analyzing Python code a bit more deeply; there are many, many more violations checked, not just PEP 8 style and things like that. PyLint is very thorough, and that's both its blessing and its curse. Because if you are, for example, creating a class and later importing it via some __import__ magic or something like that, PyLint will not like it. If you have a class that generates its fields, PyLint will not like it; it will say the field is not defined in the class, why are you using it? And you are using it because you know this field is there. So PyLint is a tool that you can't just use without configuration; you will always need some configuration. There are plugins to ignore some common things that happen in Django and Flask. But PyLint is also very, very useful, because it points out places where you need to refactor something: where a function is too long, or a class has too many public methods, or has no public methods, which in fact should not be allowed, because a class needs to present some interface. PyLint points it out. Sometimes it's relevant, sometimes not, but it always gives you something to think about. So I recommend PyLint. And here is PyLama. PyLama, as shown on the graph, just takes all the checkers I was talking about and runs them. And it's easy to think: wow, we'll just install PyLama and it will do it all. But if you want to configure all these tools, install extensions for them and have some specific settings, it's difficult: you have one big configuration file, which for some people would be fine, but for others would be hell to edit and search through. I personally like the smallest tools I can have, with the exception of Flake8, because Pyflakes is not big enough to be treated as a fully separate tool and they are integrated very tightly. There are also other tools. The ones I was talking about earlier are the classic tools for static analysis of Python code, but here are some others you can enjoy. pep257 is named after the document: pep257 checks if your docstrings are valid. And it's very thorough. It will tell you that you don't have a full stop at the end of a sentence, it enforces that the summary should be one line, and many other things. PyLint only tells you that the docstring is missing.
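To make the configuration point concrete: a minimal setup along those lines might look like the sketch below. The option names are real Flake8 and PyLint settings, but the chosen values, the disabled messages and the excluded paths are only examples for illustration, not recommendations from the talk.

    # setup.cfg (or tox.ini): Flake8 reads its settings from a [flake8] section.
    # max-complexity enables the McCabe check with the given threshold.
    [flake8]
    max-line-length = 99
    max-complexity = 10
    exclude = .tox,build,docs

    # .pylintrc: PyLint almost always needs project-specific tuning,
    # for example silencing checks that fight your framework's "magic".
    [MESSAGES CONTROL]
    disable = missing-docstring, too-few-public-methods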
Coming back to docstrings: if you want them properly checked, pep257 is the tool for you. Okay, Vulture. This tool checks for dead code, code that is not used. And as I said when talking about PyLint, it can fall into the trap of a class that is defined but used via some magic, or attributes that are set with magic methods. So with Vulture you need to be extra careful, because it doesn't have many configuration options, so you can't just ignore all the things you know will be false positives. It's somewhere between art and science to check which functions are really unused and which are used in a non-obvious way. So I do recommend it, but you can't follow it blindly, because that will not end well. And the last of the additional tools is isort. isort in its default mode sorts the imports in all the files it can find, but it can also just check whether the imports are well sorted. On its own that's not very meaningful, but if you want more meaningful results you can install a plugin for Flake8 which gives real hints on how to sort the imports. Okay. And for some of these tools, you can write extensions. For Flake8, an extension is usually another check: for docstrings, for imports. For PyLint, it's usually something that adapts to project-specific things, like Django or Flask and so on. And for PyLama, you can add new checkers. When writing an extension, you can choose to analyze either the raw code or the abstract syntax tree. Who doesn't know what abstract syntax trees are? Okay. When you have a Python program, it is parsed and the words in the program are turned into tokens, and from these tokens you form an abstract syntax tree: under an if node there is the condition value, the body of the then branch, and so on. It is your program in its purest form. And you can see an example here; it's the most relevant part of the code of the McCabe check that Flake8 runs. We can see that it just retrieves the graph and walks the AST node by node, checking things. In fact, here it's simple, because the PathGraphingAstVisitor provides the complexity per function, so it just compares against the maximum complexity. Okay, next thing: formatters. If you're lazy and you want your code to look beautiful, you don't code Python, because Python requires you to work with all this whitespace, and if you're not respecting it, you're not really coding Python. But there are some tools that try to help; they just won't do all the work that you should do yourself. The first of them is autopep8. autopep8 I don't recommend, because, as you can see here, it fixes the violations, not all of them, but leaves very ugly line breaks. So no, not unless you want to check and repair after it. YAPF. YAPF is a rather new project from Google, and in the docs you can read that YAPF is also not meant to do all the work for you; it's meant to unify the style across a whole company. For example, they wrote that you can add a pre-push hook that formats your code with YAPF, so that the code on the master branch is always formatted the same way. Even if you have your personal preferences, YAPF will do it its way. And isort, which I was already talking about, sorts imports.
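Before moving on, here is a rough illustration of the extension-writing point from a moment ago. The AST side of such a checker is usually just a visitor walking the tree; this is a toy, standard-library-only sketch, not the real McCabe or Flake8 plugin code (a real Flake8 plugin additionally registers itself via an entry point), and the file name, threshold and error code are made up.

    import ast

    MAX_ARGS = 5  # arbitrary threshold for this toy check

    class TooManyArguments(ast.NodeVisitor):
        """Collect (lineno, message) pairs for functions with too many arguments."""

        def __init__(self):
            self.problems = []

        def visit_FunctionDef(self, node):
            if len(node.args.args) > MAX_ARGS:
                self.problems.append(
                    (node.lineno,
                     "X001 function %r takes %d arguments" % (node.name, len(node.args.args))))
            self.generic_visit(node)  # keep walking into nested functions

    with open('some_module.py') as source:
        tree = ast.parse(source.read())
    checker = TooManyArguments()
    checker.visit(tree)
    for lineno, message in checker.problems:
        print("some_module.py:%d: %s" % (lineno, message))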
Okay, next thing: test coverage. Who doesn't know what test coverage is? Test coverage is this: you run all the tests and in the meantime you record which lines of your code were executed, so you can be sure that all of your code is exercised. Coverage is a module you can also use in other cases, for example running your application just to see which code is used, but its most common use case is testing. And the common workflow is that you check the coverage on your branch, if you're working with branches in Git, for example, and compare it to the coverage on the master branch, so that you don't do worse than before. One thing to remember about coverage is that the plugins for nose or pytest that integrate coverage sometimes don't give good results. Near the end of the presentation I'll show a better way of running coverage: it should simply be started before the actual test runner, that is, run the test runner inside coverage, not the other way around. Okay, other tools you can use: diff-cover and diff-quality. You give diff-cover a coverage XML file and it checks whether there are uncovered lines, but only on your branch. diff-quality works much the same: you give it a checker and it reports the violations, but only on your branch. diff-quality supports only a limited set of checkers, but you can add new ones; for now that's not possible via extensions or configuration, but maybe it will be in the future. git-lint is also a small tool, like diff-cover but smaller, because it just runs PyLint on your branch compared to master. And if you have a colleague who leaves spelling mistakes in the code, yeah, there's a tool for that too: you can check whether there are any misspellings in your code, like what we have here, "absolut" instead of "absolute", so you can catch these things. Okay, a couple of words about automation. Who doesn't know tox? If you don't know tox, you want to, because tox is a marvelous tool for running tests, and it handles its own virtualenvs. You don't need to bother about anything: you just configure tox, run it, and it's done. You can also enjoy some plugins for pytest that run PyLint. And what I recommend, and what I use in my project: we are coding on GitHub, and Jenkins has a plugin to build every pull request. In our case that means that for every pull request this plugin runs the tests, PyLint and Flake8, and if the tests fail, the coverage is lower, or PyLint fails, the pull request can't be merged. So I recommend that. However, you might not want to run Jenkins, because it's Java software, it's big, and it consumes all the RAM it is given; so maybe use Travis or another CI tool instead. And here is an example of a tox configuration. You just state which Python versions you need to check against, and you write the commands. Here you can clearly see "coverage run -m nose", so it runs nose inside coverage; it generates the XML, fetches the latest master, and runs diff-cover and diff-quality. It's an example from the diff-cover project; you can find it on GitHub.
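For reference, a tox configuration along those lines could look roughly like the sketch below. It is modelled on the description above, not copied from the diff-cover project; the Python versions, branch name and dependency list are just examples.

    # tox.ini (sketch): run the test runner inside coverage, then diff against master.
    [tox]
    envlist = py27, py34

    [testenv]
    deps =
        nose
        coverage
        diff_cover
        flake8
    whitelist_externals = git
    commands =
        coverage run -m nose
        coverage xml
        git fetch origin master:refs/remotes/origin/master
        diff-cover coverage.xml --compare-branch=origin/master
        diff-quality --violations=flake8 --compare-branch=origin/master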
Okay, so a couple of words about why; I probably have one minute. In most of these cases, you just want a uniform style. You want that when you go into some module that nobody has touched for a month, and the person who worked on it has left the company, you find yourself in an environment you know. PyLint, coverage and the other tools give you that. Also, test coverage: if you have high test coverage, like 90% or better, it gives you the confidence to refactor, because if you refactor and break something, you will know about it; the tests will run and tell you what's wrong. And if you don't have time for all this, you can go to your managers with these buzzwords: maintainability, readability, extendability, it's always better. Productivity too, yes. So, as a recap: check your code, run tests, run coverage, or things will go terribly wrong. Are there any questions? We have time for two questions. One in front and then we'll go to the back. Great. So, first of all, thanks for showing PyLama. Great, but the one thing I'm missing from your presentation is SonarQube and its plugin for Python. Well, let's stay silent about the checking rules it uses, because it's its own custom Java implementation of analyzing AST trees, but the cool thing about it is that it maintains a list of issues in your code. Yes, that's helpful. But SonarQube runs only PyLint, plus a couple of custom checks, and that's not enough, at least for me. Yeah, you can use it, and if you have enough RAM to run SonarQube, go with it. Yeah, it's very nice, it creates issues. Whether you actually want to have every violation as a separate issue, that's another matter. Okay, but if I may, that wasn't actually my question yet. I would like to ask if you know any other tools for, you know, visualizing or maintaining a list of issues that come from PyLama or the other tools you mentioned. Because, I bet, you use continuous integration and so on. In my current project we just had a master job on Jenkins: we checked master with all the rules we really want followed enabled, and if someone had some free time at the end of the sprint, they would take some of these rules and fix them. Some of them also required serious refactoring. You can fix all the PyLint, sorry, all the pep8 and Pyflakes violations in one day, I hope, if it's not a gargantuan project. But for the more severe refactorings you need to really think about what has to be done, and the ticket for that won't just be "fix violations here and here", because it might be a bit harder. Okay, thank you. There was one more question. Yeah, my question is whether you have a good tip for enforcing the rules in a team of five or more people without being too rigid about, for example, the PyLint rules and so on. Without the need for what, sorry? Without being too rigid, without having a push hook which refuses the push. Because in my experience that's a big problem: you have rules, but you always have people and circumstances and other problems that make you break the rules. So, do you have a good tip for how to handle that? Okay. My approach was to enable the checked rules not all at once, but the ones that can be enabled. We weren't using PyLint and Flake8 from the very beginning of the project. We started with the core pep8 checks and later expanded with consecutive rules from Flake8 and from PyLint. And I think that's the path. That was my path and it worked; people weren't angry about it, because in general people are most angry when you make them fix someone else's mistakes, and if you do it gradually, they just need to fix their own mistakes.
So if you talk with them and tell them that you're going this way and they agree, it's going to be okay, I think. Okay. Thank you very much, everyone. Thank you, Radosław.
Radosław Jan Ganczarek - Code Quality in Python - tools and reasons Beginner's guide to Python code quality. I'll talk about the tools for code analysis, the differences between them, extending them with new features and ways of running them automatically. In the end, I'll talk about the reasons behind all of these tools and try to convince you to use them in your projects (but if you are against it - I'll gladly listen to your arguments).
10.5446/20185 (DOI)
Thank you for joining me for my talk on Python in the world of mail order and retail. My goal is to show you how a real-world application can use Python in a very simple way to produce value for a company. So let's start. The agenda: I will show you two use cases we had in our real-world applications for replenishment solutions. Then a simplified use case: what's the essence of the real-world problems? Then what ingredients do you need, then putting it all together like in a cookbook, and what do you get at the end? Well, the framework is the solution, and it's simple replenishment. Everything is on GitHub; you can download the presentation, the notebook and the examples of the framework. Everything works, so if you like, you can look into it. The first case is a big mail order company: we want to estimate what people are going to order in the future. Right now the process is that it takes about five to seven days after your order until you get your item, say clothes, multimedia stuff, whatever, and we want to minimize the delivery time. So what do you have to do? You have to order the right amount in advance so the customer gets his items earlier, like Amazon Prime and things like that. But we're talking about 100,000 to 200,000 items in this mail order company. There is variability: the delivery time of goods does vary, and we only get an estimate from our customer of about four days from order to delivery. There are a lot of items, a lot of slow-selling goods that are quite hard to predict, and we've got all the returns from the customers. So if you order something, you have to keep in mind: I'm ordering, but I'm getting some of the stuff back with a delay of about one, two, three weeks, depending on the product group. Well, that's one case. The other one is retail. Imagine a supermarket where some amount of goods is sold every day, say meat, milk, things like that, and you have to order the correct amount to minimize out-of-stock situations. Out of stock means your shelf is empty. And you also have to handle the surplus of goods in the evening, because we're sometimes dealing with expiry dates. We've got weather, we've got seasonality, we've got special events like a soccer championship; we take care of everything. And we're talking about some hundreds of items in some hundreds of locations, so you've got a multiplication here. In the mail order example you don't have locations, it's only one location. And here you usually have fast-moving goods, because milk and bread and things like that are sold in amounts of about 100 per store. Okay, so what is the essence of this? You have to know the demand and stocks for certain periods of time in advance, and you have to know the frequency of orders. Then: order the right amount of items considering the boundary conditions; you have to calculate the correct amount for an order. So what do we need in a framework? My talk covers more or less the prediction, simulation and replenishment modules and the testing. Everything else is necessary too, but I had no time to really put it into the framework. Input/output is usually a big deal: we are dealing with companies that have to bring their data to us. Sometimes it's corrupt, sometimes we have problems defining interfaces, and output interfaces are also quite difficult sometimes.
And well, I just boiled it down to CSV files. No database, nothing else, just CSV files. I will talk about predictions, but only very low-level predictions like a moving average or rolling mean. In the replenishment I will make some assumptions so that the calculation is easy and you can follow the formula. In testing I will talk for a few minutes about pytest; I've written some unit tests for the functions I use in the framework. Plotting, logging, documentation, deployment, reporting and monitoring are also very, very important. So keep in mind: if you have this list and think about it, then you've got a minimal set for a framework. For predictions, we need to know tomorrow's demand, or the demand for some other period of time, two or three days in advance. And I prepared some easy example models: take yesterday's sales; take last week's sales for the same weekday; or, I left it out here, a rolling mean or moving average. You can also use gradient boosting or whatever you like from scikit-learn. The order calculation: I make the assumption that the stock count in the evening is zero. We throw everything away each day, because it's easier to predict and make an order forecast for items that are thrown away each day, so I don't have to do a complicated stock simulation. My delivery interval is one day, so everything gets delivered after one day. Here I've shown it: you've got the stock, you know the stock in the evening is zero, and you calculate an order for that. You know your order is coming in tomorrow and you've got a demand to fulfill so the customer is happy. And then the order is the demand tomorrow minus the stock in the evening, and the stock in the evening is zero, so the order is the demand. A very simple case. Okay, for reasons I'll explain a little later, I'm doing a simulation. I predict the demand for each day on a test sample. I have two cases: take yesterday's demand, or calculate the rolling mean of the last five days. Then I calculate the order for a given strategy. For example, take the expectation value of your prediction or, more complicated, take a quantile of an assumed probability density. Usually you can assume that goods are sold following a Poisson distribution, or something more difficult like a gamma-Poisson distribution, and you can calculate this distribution from the expectation value you forecast and then take a quantile of this assumed probability density. Okay, let's switch to a notebook. I've got some simulated data; there's a simple data generator available in the repository you can download. It produces time series with Poisson distributions with means between one and ten, and you can choose the number of items; I took about 100 items. So let's see, it's coming out. I'm plotting one product. That's the whole time series in a histogram, where you see we've got a very low number of zero sales, then about 40 sales of one, and so on. And you see the mean is about four here, so that's a Poisson distribution with a mean of four, I suppose. And I can plot the time series of this product; it looks like this. Here you see I simulated a time series from 2014 until the end of May 2015, and you can see the sales fluctuating between four and ten, more or less. You can also plot the sum of all sales of all products. I've got about 100 products and the mean is about five.
So you see some Poissonian fluctuations each day. And I can plot the mean for each product: my product IDs go from one to 100, and you see each one has a mean between zero and ten. Okay, let's go back to the talk. I've written some demos of the simulation where you can try different strategies: order one, order ten, order the expectation value, order a quantile, take different predictions. I can show you in a shell. The simulation takes about 30 seconds, and I'm giving it a config file. That's Windows for you. We can also have a look at the configuration file. Here is the configuration file I'm giving to the program; it tells the program: simulate quantiles from 10 to 99 in some steps, take the model "simple prediction" with window zero. I'll show you the code for the prediction; window zero means nothing special here. You've got a start date and an end date for the simulation, and the replenishment rule you want to use, which is what I call the strategy. Okay, let's see the simulation code. It's quite small: it reads in a data frame, takes some arguments, evaluates a prediction function, passes the arguments to the prediction function, does the replenishment, and calculates what's too much and what's missing and writes it back. So you get a rate as the result of this calculation for each product and each day. And the prediction is in here. Right now I took the simple prediction; that's just a shift: I've got sales in my data frame, and it's shifted by zero days, because the shift value I'm putting in is also zero. That means: it's the evening, I know I have nothing in stock, but I know the last sale, for example five for this product, so my guess for the order expectation value is five for tomorrow. Okay, it ran, and we got a value, call it AVMR here, of 14.51. We can have a look at the result. We've got an index, and "surplus" means: how many items did I sell and what's left in my stock, and it can be bigger than one, which means I've ordered more than I sold; that's bad. I've got an out-of-stock rate; that means: on how many days has an item been out of stock, and it's a mean over all items. So for low quantiles it's around 0.8, and for bigger quantiles it gets quite low. You're ordering a lot, so you're rarely out of stock, but you've got a lot of leftovers. You can do this for every combination of prediction and replenishment, and you get curves like this. So you've seen the output, and here I wrote down the definitions of the excess and out-of-stock rates.
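A stripped-down version of that simulation step might look roughly like the sketch below. The CSV file and column names are invented for illustration and the real code is more involved; it assumes one row per product and day, a 5-day rolling-mean forecast, and an evening stock of zero, so the order is simply the rounded forecast.

    import pandas as pd

    # One row per product and day, columns: product_id, date, sales.
    df = pd.read_csv('simulated_sales.csv', parse_dates=['date'])
    df = df.sort_values(['product_id', 'date'])

    # Forecast tomorrow's demand per product with a 5-day rolling mean,
    # shifted by one day so only past sales are used.
    df['prediction'] = (df.groupby('product_id')['sales']
                          .transform(lambda s: s.rolling(window=5).mean().shift(1)))

    # Evening stock is zero, so the order equals the (rounded) prediction.
    df['order'] = df['prediction'].round()

    # Score the strategy: surplus amount vs. sold amount, and item-days out of stock.
    surplus = (df['order'] - df['sales']).clip(lower=0)
    shortage = (df['sales'] - df['order']).clip(lower=0)
    excess_rate = surplus.sum() / df['sales'].sum()
    out_of_stock_rate = (shortage > 0).mean()
    print(excess_rate, out_of_stock_rate)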
So I'm going to show you the output. Here I wrote down the definitions of the excess rate and the out-of-stock rate: surplus amount versus sold amount, and item-days out of stock versus all item-day combinations. And we get a working curve, and depending on the costs, the customer chooses a working point. Let's think: I simulated ordering the expectation value, the usual "what was sold yesterday", and you get an out-of-stock rate of about 50% and a 25% excess rate. But you can do better. We can take the rolling mean, for example with a window of 5, and you see this yellowish curve, I hope you can see it; at the same out-of-stock rate of about 40% you get about an 18% excess rate. So with this strategy your customer would benefit from about 6 percentage points less surplus. You can choose the working point for your replenishment solution, put it in the config file, and you get your replenishment. Okay, I did this with some parallelization, because I didn't write it in a very optimized way, so I thought: I can parallelize it so it gets a little bit faster. That's quite easy if you use multiprocessing from the Python standard library. I've got four cores on this laptop, and the core of it is a map of a function: the function is the simulation wrapper and I put in a list of values, and the list of values are my quantiles. So I parallelize it so that each job is calculated for one given quantile. In our project with this mail order company we're using Redis for calculating it on multiple hosts, because we're dealing with, let's say, 650,000 products we have to simulate, for about 25 quantiles, over a time period of 60 days. It takes about one hour with 90 CPUs, and that code is a bit more complicated and optimized for speed. Okay, testing. You've seen I've got some functions in my code: the predict functions, three of them, and in the simulation I've also got some functions, for instance the simulation per quantile, and I've written some tests. The tests are in the tests directory; they just assume some data frame, put the data frame into the function and check the result, and I'm using pytest. Just type pytest and it looks for all the test files in the directory. I've got three files, ten unit tests, and they all pass. I'm also using pytest coverage, so I can say: take those two directories and tell me how many percent are covered. Okay, it writes out an HTML file. Can I look into it? htmlcov, and the index. So I've tested those two directories, the replenishment and simulation directories, and you can see the test coverage is 99%, sounds good, and we can have a look into it: you've got the code and everything is green, so this function has been tested. Usually you don't test every case, but you can see that you can test every function here. So start_simulation is tested; the __init__ files are just for importing, there's nothing in them; and the simulation is tested. Well, I didn't test the wrapper. You can see it here, it's red, this function is not tested, so I've got 96% coverage there: one missing, 25 run. Okay, and I'm also validating my config. I'm using the voluptuous package from PyPI. It's quite nice if you want to check config files.
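A minimal sketch of such a voluptuous schema for the simulation config could look like this. The key names and the date format are examples based on the description, not the real config file; the voluptuous names used (Schema, All, Range, Invalid) are the library's real API.

    from datetime import datetime
    from voluptuous import All, Invalid, Range, Schema

    def date_string(value):
        """Accept only 'YYYY-MM-DD' strings, otherwise raise Invalid."""
        try:
            datetime.strptime(value, '%Y-%m-%d')
        except (TypeError, ValueError):
            raise Invalid('expected a date in YYYY-MM-DD format')
        return value

    schema = Schema({
        'quantiles': [All(int, Range(min=1, max=99))],
        'model': str,
        'window': int,
        'start_date': date_string,
        'end_date': date_string,
        'input_file': str,
    })

    # config = yaml.safe_load(open('simulation.yaml'))
    # schema(config)  # raises an Invalid error on bad values or extra keys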
As I showed you earlier, I've got some config files, like the one for the simulation with its dictionary, and I've written a validator. I parse the config first, it's in YAML, and then I validate it. The validator is right here: I'm importing from voluptuous, the Schema object and so on. Look at the documentation, it's quite nicely documented. Here I'm defining some lists: a list of ints for my quantiles, then I've got a check for dates, and here is the schema, and I validate against the schema. So the quantiles have to be between 1 and 100; okay, 100 is not in the range, so it's 1 to 99. The model has to be a string, the window has to be an int, these have to be dates with a certain format, year, month, day, and the input file a string. Okay, let's test it. Oh, I hate Windows. Okay, I can feed YAML into it, for example the simulation config, and nothing happens, so everything's fine. Now let's change it: I go to the simulation config and put a string into the window. Save it, run it again, and I get an invalid message. I didn't make it pretty, it really just raises an exception. Okay, here: "extra keys not allowed" at the prediction start date. Why the start date? Actually it's before that. Interesting, sometimes I don't know what's happening. And now it's working. Ah, okay, that was an error; now it's the window. Okay, so that was the config validation and pytest. Now, replenishment. It's nearly the same code I use in the simulation, just a little different: you don't scan over a range of quantiles, you define your working point from the nice curve I showed you earlier. Here I take a 60% quantile, so I hope I'll get an out-of-stock rate of 33% and an excess rate of 23%. That means on about one of three days the customer stands in front of an empty shelf, and the customer, a grocery store or something like that, has to throw away about 23% of what he's selling. That's quite high, but I'm throwing everything away each evening, so this is really a worst-case scenario. Usually in grocery stores you've got expiry dates of a week, or two or three days, depending on the product. So the code is nearly the same. Let's go into the code.
So we just calculated the orders and usually you're putting that into a database and then you are you putting it on an output file and the customer retrieves the output file and does an automatic order so that's the optimization. Let's have a look at the order. Generated. It's called test. It's just a simple CSV file but okay with the date a product ID sales versus the truth usually you don't know that and prediction and order. So my prediction is six and I'm ordering I think the 70 percent quantile so it's a little bit higher because it's above the mean and you see okay you've got some orders. Can have a look into a file I prepared earlier in a notebook and orders read in the order and plot everything into histogram. You see the order is distributed between 0 and 17 and the prediction is a little bit lower because that is the average and what's my average order for this day it's six so my mean is I'm ordering about six items per order as some six amount of six per item per day and while my randomly simulated data has more less than average of five so it's not so bad. Okay what is missing I didn't talk about logging but I think it's quite important if you've got a big project a lot of stuff going on you want to analyze what's wrong because the system is usually quite stable but sometimes something happens like the customer sense corrupt data or you've got memory which you didn't expect and stuff like that and then you need a logger so everything's covered in the standard library of logging you can use gray log and everything something like that to evaluate the logging data and then reporting monitoring plotting is very important because well data in CSV files is nice but I am a visually guy so it's just I want to see some plots some data visualized you can use different tools like matplotlib and on top on its c-burn for doing some simple regressions or you use bokeh where we had some talks around here. 
Okay documentation is also important I just wrote down which parameters go into functions as a comment but you can put good documentation you can use things and also if you have got some productive system or testing system you want to deploy code at the moment we I'm using Ansible there's a nice talks around here and you can just look into the web pages while putting an autogas what do we have we've got a prediction module a replenishment rules put together to a replenishment module and we've got a stock simulation which is very very simple but it's easy to enhance it and we've got a simulation and what we also would nice to have is documentation logging monitoring tests and configuration we have or I showed you that and deployment and reporting reporting means customer wants a daily report of how many items were out of stock how much was left over in the shelves what's the average average amount of items ordered stuff like that because usually if you send him the data and he puts it in the database sometimes it's very hard for our customers to get the data out so reporting plays a big role in in other companies we we don't like too much reporting stuff because it takes time you have to prepare data and usually we have to look at the data ourselves because it's as a cross check with our customers so the customer says okay the access rate was 12% and we say okay we've got monitoring reporting no we just made a plot it's 8.5% you're doing something wrong and you always have to cross check with your customer because in the real world while simulating is nice but I'm double-checking is even better okay well what we have we've got a simple replenishment framework with a basic solution for an automated order forecast system in the case of everyday order and everyday replenishment and it's quite simple to put this together with the tools pysons then a library pandas and so on and lump I give gifts to you and well what's the lesson well writing pyson code is easy and you can use it in in everyday life and you can use it to to produce real value for companies because throwing away goods or shortening delivery times is very it's very good for our customers and they're quite thankful for us that we are applying them with good prognosis and good forecast so thank you and visit us out our plunder booth I hope you see you soon and if you like to ask some questions feel free you
Philipp Mack - Python in the world of retail and mail order At Blue Yonder a lot of different python packages, provided by the community, as well as our own self-written packages, are used in order to provide flexible solutions to our problems. In this talk I'll present a walkthrough of a generic python application example for demand and purchase order quantity calculations, putting together those packages in an orderly way. The example will feature real world problems derived from hands-on experience with our retail and mail order customers. Additionally the talk addresses the subjects of testing, configuring, parallelising and deploying the code.
10.5446/20184 (DOI)
Hello, everybody. There's this famous interview question that says: you type python.org into the web browser and press enter, what happens? This talk is a bit similar. It's about what happens when you try to import some random module. Lots of stuff happens. A little while after I submitted the talk, I learned that David Beazley led a three-hour tutorial on this at this year's PyCon. I'll try to look at it from a different angle, and if this talk is not enough for you, there's lots more material you can use to learn. More than a deep dive, it'll be like a guided tour through what happens when you import something, but hopefully, when we're finished, you can take deep dives through the source code yourself. So what happens when you execute this statement? Under the covers, there's a global dunder-import function, __import__, that gets called, and the result is assigned to a variable. That's pretty much all that happens. The import statement is a little more powerful than that; it evolved over the years, so you can do sub-package imports with dots, and you can import things from modules. The mapping from this to the __import__ function is not always trivial, but it's documented pretty well in the docs; if you want to do that, read it. All __import__ is, is an interface to the import machinery, which nowadays is all written in Python. It's in the importlib module. If you want to import something programmatically, there's also a convenience function called import_module that is much better to use: if you have a string with a module name, just use that. It's also just an interface to the import machinery. The other thing you can do with __import__ is replace it with your own function, but that is not very useful, because then you have to reimplement most of the machinery yourself. It's not useful to call __import__, it's not useful to replace it, so it's probably better if you just forget about it. The import statement calls the import machinery, so I will talk about what the import machinery does. I'll skip all the locking and caching, the error handling, and all the stuff that takes up most of the library but isn't really necessary for you to know what's going on. The basic algorithm for what happens when you import something is actually pretty simple. It looks like this. The first thing is this cache: there's this sys.modules dictionary. If you import a module that has already been imported, it's stored in the cache, so when you re-import it, you get exactly the same object back. There's a catch to this: when you delete something from the dictionary and then re-import the same module, it's gone from sys.modules, so the import machinery thinks it has never been imported and imports the module again. You get a brand new module object, and every function and every class in there will be brand new, which most of the time is not something you want, because code doesn't expect this. But you can do it. The other thing you can do is poison the cache: you can assign anything to sys.modules. You can put a string in there, and when you then import it, you get a string as a module, and you can use all the string operations on it. Some modules actually use this to make modules that are callable or subscriptable or have arbitrary attributes. There's some limited use to this, but maybe you shouldn't do it in production. So that's the first step.
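A quick way to see the cache behaviour described above in action (just an illustration; the module names are arbitrary):

    import importlib
    import sys

    first = importlib.import_module('random')
    assert importlib.import_module('random') is first   # cache hit: same object

    del sys.modules['random']                            # drop it from the cache
    second = importlib.import_module('random')
    assert second is not first                           # a brand new module object

    sys.modules['spam'] = "definitely not a module"      # poisoning the cache
    import spam
    print(spam.upper())                                  # string methods "work" on it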
The second step: there's this find_spec function that takes the name of the module and a path. In most cases the path will be sys.path, which is just a list of all the locations that modules can be imported from on my system. It's usually much longer than this; for details on how it's constructed, the Beazley tutorial I mentioned talks about it at length. With these two, I call find_spec, and that gives me a spec object, the module spec. That is just a description of how the module will be loaded and where it will be loaded from. There's actually a utility function that you can call to get the spec without importing the module, like this. The module spec gives you the name; the loader, which is the strategy for how it will be loaded; and the origin, which is where it will be loaded from. You can get all that without importing the module, which might be useful at times. The module spec also becomes a permanent record of how the module was loaded: on any module you can look at the __spec__ attribute and see where it got loaded from. The next step is the actual loading. We'll look at it in a bit more detail later, but what happens here is that an empty module object is put into sys.modules, and after that it's initialized. It's important that it's done in exactly this order: first it's put into sys.modules, and after that it's initialized and all the functions and classes get assigned to it. Then the machinery looks in sys.modules and returns whatever it finds there. This is a simplification, of course, but you can already use it to reason about real-world problems. For example, import cycles, everybody's favorite thing when it comes to importing, as I've learned. We have two modules here: one imports the other, and the other imports the first one again. This is a very bad thing to do; it usually results in errors that are not so nice. But if you know this algorithm, you can reason your way through what is happening. If I import foo, the machinery checks sys.modules, doesn't find foo there, so it finds the source code for foo and starts loading it. First it puts foo into sys.modules, and then it starts going through the source line by line. The first thing it finds is "import bar", so it goes to import bar. It doesn't find it in sys.modules, so it puts an empty module object for bar into sys.modules and starts going through bar's source. The first thing it finds there is another import: it tries to import foo. It looks in sys.modules and finds foo in there, because it already put it there, but foo has not gone through all of its initialization yet. We get a half-initialized foo object, and then we try to call a function on it which Python hasn't seen yet during initialization, so this fails with an AttributeError, and the whole thing falls over: you get an ImportError and start looking for where the error is, and it's not so obvious. There are some tools that can detect these import cycles and warn you; you should use them. And the best way to solve the problem is probably to take the functions that both modules need, move them to a third module, and import that. But if you ever run into this situation, you now know how to reason your way through it.
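Written out, the cycle scenario above is just two files that import each other; the file and function names here are made up for illustration. Running "import foo" then fails for exactly the reason just described:

    # foo.py
    import bar             # starts loading bar before foo is fully initialized

    def help_bar():
        return "help from foo"

    # bar.py
    import foo             # gets the half-initialized foo from sys.modules

    print(foo.help_bar())  # AttributeError: module 'foo' has no attribute 'help_bar'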
Okay, so here we go. You can see I left some space here, because there's obviously something more, and that something more has to do with submodules and packages. So let's go through a little bit of vocabulary. Our random module was a top-level module; you can import it directly. So is urllib, for example. But urllib also has other modules below it, so urllib is a package: it's the parent of urllib.parse, urllib.request and urllib.response, and those are submodules of urllib. Everybody clear on that? I hope you knew that already. What happens when I try to import a submodule is, first, that the path is different. For submodules the path is not sys.path; it's taken from the parent. The parent has this __path__ attribute, and that says where all the submodules are loaded from. The second thing that's different is these two parts: for submodules, the parent is always loaded first. There's no way to load urllib.parse without loading urllib; it's always done first. And if loading the parent somehow causes urllib.parse to also be loaded, at this point we just return. Otherwise, it's imported normally. And at the very end, after everything is done, the submodule is set as an attribute on the parent. So if you import urllib.parse, the object you actually get is urllib, but it has an attribute "parse" on it that you can reach with the dot, because it was set as an attribute at the very end of importing. So there's the more complete algorithm, which you can use to reason about more complex situations involving submodule loading. For example, say I have this simple package, an __init__.py with two imports, some constant value, and some code that uses it, and I try to import it. What happens? The parent module is always loaded first, no matter which of these you import. So first it looks in sys.modules for foo, doesn't find it, looks up the source and executes it line by line. The first thing it finds is an import statement, which invokes the machinery again. It looks in sys.modules for foo.main, doesn't find it, because it's a submodule, so it goes to load the parent foo first, which is already being loaded; it's already in sys.modules, so it returns early. Then it goes on to executing foo.main's code. It gets to the import statement there and tries to import foo again; it looks in sys.modules, the foo module is there, so it returns that. And then we try to use it, at which point we have the foo module, but it doesn't have the const attribute yet, because that gets set near the end of the initialization that we haven't finished yet. So once again you get an error there. This is kind of complex, and you have to understand the algorithm, which arguably is not that hard, but with bigger packages it gets complicated. So I've prepared a set of little rules to follow to be okay. First, your __init__ should be a kind of public interface to your package: it should just import stuff from the submodules, maybe set __all__, and do nothing else. Then your submodules should not use that public interface; they should import directly from the submodules they need, because you know the internal structure of your own package. And obviously you shouldn't have import cycles between the submodules themselves. If you follow these rules, you should be okay; otherwise, you understand the algorithm and can reason your way through. Okay, so that's that. Now maybe you're wondering what exactly this find_spec does. Let's look at that; first at the result. Where do you actually load a module from? If I import my random module, I can print it out and I see it's loaded from some location on my system. I can look at the __file__ attribute and get the same thing back as a string.
But if I import another module, say sys, and print it out, I see it's built in; it doesn't have a __file__ attribute. Does anybody know where the sys module is actually located on your system? No? sys is actually built into the executable itself, so in my case it lives inside the Python binary under /usr/bin; it's built into the actual program. But all the other modules are in this place. So these are two different kinds of modules. If we have a look at the aquarium of module types, we can see: we have the built-in modules, which are written in C and compiled into Python itself; we have source modules, which are written in Python and loaded from files; and we have some other types as well. We can have extension modules, which are written in C or some other compiled language and loaded from a file, a shared library. On my system that's math, for example, and some NumPy core modules can be extension modules as well. And the fourth type is frozen modules, which are written in Python but compiled into the executable itself. One example that everybody uses is the frozen importlib, which is a copy of the import machinery built into Python for loading the real import machinery, because you have to use the import machinery to read stuff from files. And tools like py2app or py2exe actually compile your Python modules into the resulting executable to make a single-file executable, so that's another use case. So how do we load all these different kinds of modules? There's this list of strategies in sys.meta_path, and the algorithm is quite simple: we just ask each of these finders in turn whether it can load our module. If I'm loading the sys module, I ask the built-in importer: hey, do you have a sys module? The built-in importer looks at the list of built-in modules and says: yep, here it is, here's the information, and it gives me a spec for it. If I'm importing random, I ask the built-in importer; it doesn't find a random module among the built-in modules. So I ask the frozen importer; it doesn't find random in the list of frozen modules. So I ask the path finder. The path finder is a bit more complicated; this is the thing that looks at sys.path. It goes through every entry in sys.path in order, and for every path it has what is called a path hook. The algorithm looks like this: it goes to the current directory and constructs a path hook for it. The zip importer can't handle a directory, so it's skipped, but there's a file finder which can handle a directory, so that one is used for the current directory, and there we look for these files, and we probably don't find any of them there. So we go to the next entry, which is a zip file. We ask a zip importer for this zip file whether it can find any of these; it can't. So we go to the next entry and ask the file finder whether it can find any of these in there, and it can: random.py is actually in this directory. And since the file is there, the spec is returned. At this point we have a spec, and when we have a spec, we don't look any further. So when the file exists, the spec is returned for it and the machinery doesn't look any further along the path; the first match wins. And what's in the module spec? Again, we have the name; the origin, which is the source code to load; a location for the cache file, which may or may not exist; the loader, which is the strategy to use to load the source; and some other loader-specific information. You can read all about this in the PEP that I will link later.
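You can poke at all of this yourself without importing anything, for example like this (the exact origin paths will of course differ per system):

    import sys
    import importlib.util

    spec = importlib.util.find_spec('random')
    print(spec.name)      # 'random'
    print(spec.origin)    # path of random.py in the standard library
    print(spec.loader)    # a SourceFileLoader instance

    print(importlib.util.find_spec('sys').origin)       # 'built-in'
    print(importlib.util.find_spec('no_such_module'))   # None

    print(sys.meta_path)  # BuiltinImporter, FrozenImporter, PathFinder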
So that's how you get the spec. And we have a bit more time left, so I can talk about how to actually load a module. Once we have the spec, the loading is kind of simple. First, we create a module object, and a module object is nothing special; it's just an object that has a __name__ attribute. Either the loader can create one, or, if it doesn't want to, we create a default one. After that, we set the initial module attributes, which are really just copied from the spec: the spec gets copied to __spec__, the name gets copied to __name__. So now we have two places for each of these bits of information, which is kind of redundant, and you can change each one individually, so it's a bit of a mess, but one of them is always used later. After that, we put the module into sys.modules and execute whatever source code we find. The global variables are actually just attributes on the module object, which is kind of fun to play with if you import the __main__ module: you can assign a global variable and get it back as an attribute, or vice versa. And this is also where __name__ comes from; it's assigned very early in the loading phase, so by the time your code is executing, it's already there and you can check what it is. So that's executing the module. One more thing I have is how to actually get the code for a source module. In the module spec we have both the origin, the py file, and the cache location. If the cache file exists and was compiled from a matching py file, that is, it has the same size and the same modification time, then the byte code is read from the cache file and executed. If it doesn't match, the source is read from the origin file and potentially stored in the cache. If you're familiar with how Python 2 did this: the origin and the cache were in the same directory, which had the problem that if you deleted the py file, the pyc still got executed. So there was this zombie that, for some reason, was there and did the same thing as a deleted file, which used to throw off a lot of beginners, and not only them. In Python 3 we have the __pycache__ directory, which no longer has this problem: in __pycache__ we have the pyc, but if the py is not there, the cache isn't even looked at. What you can do, if you really want to load things from pycs, is copy the pyc over to the old location and delete the py; that will actually work. And this is all the code, just a screenful that you have to understand. If you want any more details, importlib is installed on your computer, so you can just look at it and see what's going on. Thank you. Thanks, Petr. Do we have any questions? Thank you for your talk. I would like to know if there is a use case for being able to load source code from a zip file. What's the use case for loading code from zip files? From pyc files? No, no, zip files. Okay. When I showed you the different kinds of modules, I wasn't really complete; it looks like this. You can load from native code, for example written in C, and you can load from Python code or byte code. You have built-in, frozen, extension, source and sourceless; sourceless are the pycs. And you can also load source or sourceless files from zips. This is done for easier packaging: for example, some Windows users don't like deep directory structures with lots of files, so you just zip it all up, nowadays it usually gets the pyz extension, and you can import directly from that.
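As a tiny illustration of importing from a zip archive (the file and module names are made up):

    import sys
    import zipfile

    with zipfile.ZipFile('bundle.zip', 'w') as archive:
        archive.writestr('hello.py', 'def greet():\n    return "hi from a zip"\n')

    sys.path.insert(0, 'bundle.zip')   # the zipimporter path hook handles this entry
    import hello
    print(hello.greet())
    print(hello.__file__)              # something like 'bundle.zip/hello.py'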
You can actually run those. The .pyz extension is associated with Python, and if you have a __main__ module in there, it will actually run it. Also on Linux, if it has the shebang, you can run those. So it's just an easier way to package things. You just download one zip file and everything's in there. Any more questions? If there are no questions, I can find something else to talk about. Do we have time? A few minutes. All right. So one thing I forgot is this create_module and exec_module. So this is for Python modules. For C modules, like the extension or built-in ones, everything happens in create_module. There's a PyInit hook function that creates the module and also initializes it in one step. And then this exec is just a no-op. That's nothing. So that is the current situation with Python 3.4. For Python 3.5, there is a new mechanism that does something similar for extension modules to what happens with Python modules. So the create creates an empty object, and then there's a separate exec that you can do your work in. Which is better, because at the time exec is run, the module object is already in sys.modules. For example, what could happen before is that if you run some user code, run some Python code, and it tried to import your module again, you would get into an infinite loop, because it's not in the cache, so it would try to re-import your module again. And the loading is a bit more declarative now. It's in PEP 489 and you can go read that if you're interested. Yes. So there's work going on in this area still, and I hope the talk won't be obsolete in a few years. Hi. I was just wondering what would happen if you loaded a module with a class in it? Once again? You load a module, it's got a class in it. Right. Instantiate that class. And then you do that trick that you said you shouldn't do at the start, where you re-initialize that module. Yes. So what the re-initialization, or just reload, does is it creates a new module object and creates new class objects. But every instance of an existing class has a reference to the original class. So all the old instances would use the old class, and all the new instances would use the new class, which creates some problems. For example, if you try to check for equality and it's implemented by looking at the class, then the classes obviously don't match and you have a problem, because you think they're the same, and the string representation is the same, but the class is actually different, and who looks at the class id, right? So there are some use cases for this, but it's usually better to stay well away from it. Okay. Thanks very much, Petr. Thank you.
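The reload pitfall from that last question is easy to reproduce; this sketch assumes a hypothetical module mymod.py that contains nothing but a class definition such as "class Thing: pass":

    import importlib
    import mymod                      # hypothetical module defining: class Thing: pass

    old = mymod.Thing()
    importlib.reload(mymod)           # re-executes the module, creating a *new* Thing class
    new = mymod.Thing()

    print(type(old) is type(new))          # False: old instances keep the original class
    print(isinstance(old, mymod.Thing))    # False: mymod.Thing now names the new class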
Petr Viktorin - Import Deep Dive Whatever you need to do with Python, you can probably import a library for it. But what exactly happens when you use that import statement? How does a source file that you've installed or written become a Python module object, providing functions or classes for you to play with? While the import mechanism is relatively well-documented in the reference and dozens of PEPs, sometimes even Python veterans are caught by surprise. And some details are little-known: did you know you can import from zip archives? Write CPython modules in C, or even a dialect of Lisp? Or import from URLs (which might not be a good idea)? This talk explains exactly what can happen when you use the import statement – from the mundane machinery of searching PYTHONPATH through subtle details of packages and import loops, to deep internals of custom importers and C extension loading.
10.5446/20181 (DOI)
Thanks everybody. Welcome to my talk, Building Nice Command Line Interfaces, a look beyond the standard library. I am Patrick Mühlbauer. I'm a back-end developer at Blue Yonder, working in Karlsruhe. And this is my third EuroPython now, but the first time as a speaker. I'm pretty excited to be able to speak here today. And what I want to talk about are command line interfaces. And I want to show you a little demo in the beginning. It's called Brewery. I thought, I'm originally from Bavaria, a German guy from Bavaria, I do something with beer. So this is a typical help screen. I think everyone here knows what a command line interface and such a help screen is. So you see a usage line. You have a little description. You see you have some options and of course some subcommands. And yeah, you can call these subcommands. You have separate help pages for your subcommands. This would look like this. So here we have a list command; you can see we are able to list some of the beers. So this would look like this. Okay. Let's see. And these are actual beers from the Augustiner brewery, a famous brewery from Munich. And we can also, as we have seen from the other options, list our own beers. We don't have any by now, but we can buy a beer. So let's look at what the buy command looks like. So we have an option name and an option count. So let's use this. Let's buy a six pack of beer. So now we have six bottles of pils. Yeah. So that's just a short introduction, so you get the idea what a command line interface is if you don't know already. And what I want to talk about is: if you are new to Python or programming, you might wonder how to start. As I started programming, I ended up parsing sys.argv by hand. And I have seen others doing the same. So sys.argv is just a list of the command line arguments. And if you try to parse that by yourself, you end up with pretty ugly code. Or you start writing your own parser, but yeah, there are already some out there, like getopt. Getopt is an implementation, well, the Python implementation, but the interface actually comes from C. And if you are familiar with the C programming language and have used getopt, it might be a good fit for you, but I don't recommend using it. It's not really nice. So if you want to stick to the standard library, you might find optparse or argparse more pleasant. I have to say optparse is deprecated by now; argparse is the newer one. And yeah, argparse, I think it's okay. It does its job. It's pretty straightforward. But if your application grows and you have multiple subcommands, you always have to create all these subparsers, and the code starts growing in a not so nice way. So if you think so too, you might be wondering if there's more. And yeah, there is. So I want to show you today three libraries which are out there: click, docopt and cliff. The first one is click. It's a project created by Armin Ronacher. I think a lot of you know him; he's quite popular in the Python community. He has created, for example, Flask and Jinja. And he was also not that happy with the solutions already out there, so he decided to write his own, and click was born. And click is a decorator implementation. So you use decorators to mark your functions as commands, for example. And click is highly configurable, and it also comes with good defaults. So docopt has a completely different approach. For docopt, you have seen these help screens. You just write these help screens and then you already have your parser.
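For comparison with the tools discussed next, a bare-bones argparse version of the demo's buy subcommand might look roughly like this (the option and program names are just illustrative, not the speaker's actual code):

    import argparse

    parser = argparse.ArgumentParser(prog="brewery")
    subparsers = parser.add_subparsers(dest="command")

    buy = subparsers.add_parser("buy", help="buy some bottles of beer")
    buy.add_argument("--name", required=True, help="beer to buy")
    buy.add_argument("--count", type=int, default=1, help="number of bottles")

    args = parser.parse_args(["buy", "--name", "pils", "--count", "6"])
    print(args.command, args.name, args.count)   # buy pils 6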
Docopt will parse this docstring and then you get your arguments as a dictionary. So the third one is cliff. It's from a developer from OpenStack. And cliff is more like a framework for multi-level commands, something like git, for example, where you have git add, git push, and so on. And for its subcommands, it uses setuptools entry points. So you define an entry point in your setup.py and then cliff will find it. And yeah, I will show you that later. And it also comes with output formatters. I have a little demo; it's better I show you that later. So let's start with click. How does it look? A minimal example would look like this. You just have a function and decorate it with the command decorator. And yeah, you're done. Then you could call your script and you would already get a help message looking like that, which is the starting point for docopt. So with docopt, you start writing this help message, then use the docopt function, put the docstring in it, and get back the dictionary with the parsed arguments and options. So for cliff, this is a little bit more code, but actually it's not that complicated. There are two important things here: this App class and the CommandManager. So you have to subclass from App and then you have to pass a CommandManager to it. And this command manager says how subcommands should be found. And the default command manager here finds subcommands via setuptools entry points, like I already said. So it looks for entry points in the cliff.brewery entry point group, so subcommands. For click, you have another decorator called group. So now you say your run function should be a group. And then you use this group to decorate other functions as commands, as subcommands, which then belong to this group. It's just these two lines of code and you have your first subcommand. So in docopt, it's a bit more complicated if you want to have separate help messages for each subcommand, because what you could do using subcommands is just write multiple usage lines where you say, instead of command, you have list, buy, and so on. But then you would not have separate help messages. So what you can do is implement your subcommand in another module, write your docstring there, and then check here if the command you used is, for example, list, import the module and then pass its docstring to docopt. And using docopt with subcommands is not really nice, as you can probably see. So, and now cliff. Here, in your setup.py, you have this entry_points keyword argument. And now you say again this cliff.brewery entry point group, and to define a subcommand, you say, for example, list equals brewery.commands, which is the brewery package and then the commands module. And in the commands module there is a List command class, which would look something like this. So you just subclass from cliff's Command. And then you have to override the take_action method. The take_action method already gets the parsed arguments passed in, and these come as an argparse Namespace object. So cliff uses argparse internally. So next, yeah, you probably want to use options and arguments. In click, it's again just a decorator. You have an option decorator. So here we say we want to add a debug option to our run group. And we also say it should be a flag, so it's just a Boolean value in the end. And the same for arguments: you have an extra argument decorator.
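A minimal click sketch of the group/subcommand pattern described here could look like the following; the beer names and command bodies are made up for illustration:

    import click

    @click.group()
    def run():
        """Brewery command line interface."""

    @run.command("list")                 # expose as 'list' without shadowing the builtin
    def list_beers():
        """List the available beers."""
        click.echo("Edelstoff\nHelles\nPils")

    @run.command()
    @click.argument("name")
    def buy(name):
        click.echo("Buying one bottle of {}".format(name))

    if __name__ == "__main__":
        run()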
Here we say we want a filter argument. And this is then also passed to the list function. So in docopt, again, it's just writing your options and arguments in the docstring. For example, here we have a subcommand list which gets this optional filter argument; optional in such a help message means these brackets. And here's also an example of how such a dictionary would look, what you get from the docopt function. So in cliff, you have to override the get_parser method. There you always first retrieve the parser, which is an argparse ArgumentParser, and then you just use the add_argument method to add the filter argument. So here, in get_parser, it's just argparse. Okay. Another feature: perhaps you don't only want optional arguments, perhaps you want repeating arguments. In click, you can achieve this with the keyword argument nargs in the argument decorator. So if you use minus one, for example, this means you can put an infinite number of filter arguments on your command. And filter would then be a tuple of strings, not just a string. In docopt, again, it's just documenting it. So in such a help screen, repeating arguments are these ellipses, these three dots. And the filter in the args dictionary would be a list. Okay, argparse also has an nargs keyword argument, but here it's not a minus one, it's an asterisk. And yeah, that's all. So another thing: defaults. You don't want to specify a value for every option all the time. So for example, for our buy command, we could say if we don't specify a number of bottles we want to buy, then just buy one. This is done in click with the default keyword argument. And you also have a required argument, if you really want to say this option must be specified. In the case of docopt, you add this default to the end of the option help string, and the same for required. And yeah, that's all basically. Okay. For argparse, again, it's the same as for click: just the required and default keyword arguments. One difference here is that if you specify a default value, click uses the type of that value as the default type. So if you would use count with something other than an integer, click would raise an error. And so what you probably also want is type support, to validate your arguments and options. So in click, you have a type keyword argument, and click also has nice utilities, for example custom types. Here I used an IntRange as the type, because buying zero or fewer bottles would be nonsense anyway. So I said it should be between one and infinity. Docopt does not have any types; everything is a string or a bool, and you have to do the checking all by yourself. For argparse, you also have this type keyword argument; it works basically the same as in click. Okay. So another feature: if you want to automate things, for example, you might want to use environment variables. And in click, again, it's just another keyword argument. So you specify an envvar, and in the example you can see that if you don't have this environment variable set already, the default value is used. But if you set the environment variable, then that value is used. And docopt, again, does not have support for environment variables. And argparse also does not have built-in support, but you can do a little trick.
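Pulling the option features together (default, type, required, and environment variable), a hedged click example might read like this; the BREWERY_COUNT variable name is only an assumption for illustration:

    import click

    @click.command()
    @click.option("--name", required=True, help="Beer to buy.")
    @click.option("--count", default=1, show_default=True,
                  type=click.IntRange(1),          # buying zero or fewer bottles makes no sense
                  envvar="BREWERY_COUNT",          # illustrative environment variable name
                  help="Number of bottles.")
    def buy(name, count):
        click.echo("Buying {} bottle(s) of {}".format(count, name))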
You just set the default value using the os.environ dictionary, and then you have basically the same effect as in click. You probably also want to test your stuff. And in click, this is really nice, because click comes with an extra testing module. And in this testing module, you have a CliRunner. With this CliRunner, you can just invoke your commands, and then you get back a result object. And this result object holds the exit code and the output. So testing what a command returns is really easy. Using other stuff like argparse and so on, I don't have a really good pattern. So if you have one, I would be glad if you came to me. What you find on the internet is always the same: you just call your function and then... Probably some people use the subprocess module to just call your script and then parse the output and so on, but that just does not seem right. What I did here is I used pytest for testing, and pytest has this capsys fixture. And if you use this fixture, it captures the output and then you can test the output in a way similar to the click solution. Okay. So one last thing for cliff. Cliff has, as I already mentioned, these output formatters, and cliff has special subcommands. For example, this Lister is just a subclass of Command. And it's best I just show you this. So here's the cliff implementation. You also have a list command, and if you use this Lister class I showed you, then you would get this ugly output. So it's a table. It's really awful, okay, but you have this extra option which comes with cliff: you can say you want it to fit the width. Okay, the resolution is not really nice here, but you can see you get a nice table just by doing basically nothing yourself. What you have to do is structure the data you want to show in this table. So it's just one header row and then the multiple rows you want to show, and then, yeah, that's all. So, one other thing: cliff is pretty cool when using plugins. So here we can see we don't have a drink command, which is really sad. So we can just go, well, think of this brewery-drink package like a plugin, and here we have this entry point defined. So we have this drink command in the plugin, and if we install it now, then, yeah, now we have a drink command. Nice. So, yeah. And sorry, my favorite is click. It's really nice working with click. You have a lot of utilities, it's very robust, it's just fun to work with. Docopt is nice if you are already familiar with these help messages and you really want to write nice documentation and so on. But I think for bigger applications it's not that good. And, yeah, cliff, I really like these output formatters in cliff. It's really easy to get a nice table, or, oh sorry, what I have not shown you is that you also have multiple formatters. For example, you can say you want the output as CSV, or there are also extensions to get JSON or YAML, and you can also specify the columns you want to show. So if you just want the beer names and the descriptions, then, yeah, you would just have two columns. So, and I think we are out of time, and that's the end. So thanks for your attention. If you have questions, just ask, or ask at the coffee break. Well, thanks for your talk. One common usage in options is to have the long option with hyphen hyphen name and the short one with maybe hyphen n.
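A sketch of the CliRunner-based test described here, assuming the buy command from the previous snippet lives in a hypothetical brewery.cli module:

    from click.testing import CliRunner
    from brewery.cli import buy          # hypothetical location of the command above

    def test_buy():
        runner = CliRunner()
        result = runner.invoke(buy, ["--name", "pils", "--count", "2"])
        assert result.exit_code == 0
        assert "2 bottle(s) of pils" in result.output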
I don't recall seeing any particular examples of that, but I assume all of these tools would support it easily. What you mean is you want to have something like this and this? Yeah, sure. You could use either one for the same thing. Yeah, you can do that in all three solutions. Okay, great. Thanks. More questions? Hi. I'm just curious, have you seen Google's flags parsing library? I think it's called python-gflags. And if you've seen it, what's your opinion about it? No. Okay. Easy. Do they have support for colors in the output? Sorry? Do they have support for colors in the output? Support for what? Color. Yeah, for color. Oh, yeah. So click has this echo function; I used it on one of my slides, I think. It's basically something like the print function, but it has a lot of stuff internally to handle Unicode for you on different platforms and so on. And it also has a keyword argument for colors. And yeah, then you would get colors. I think in docopt and cliff or argparse, you don't have this built in. I would like a tool that reads all these complex definitions from a configuration file and does not do them inline in the code. Which is the best one to do that? Or maybe it's not an option. Okay, you mean for docopt or what? For each of these tools, I don't want to write long lines of code. I want to be able to add a function, go to a YAML file, add a new command and be done with it. Sorry, I don't know. It might be possible somehow, but I don't know. More questions? Thanks for the talk. Do you have any good solutions for an interactive command line interface? Interactive. Cliff has interactive mode built in. I think, do you mean something like that? But that's the only thing I know. I don't know if there's something else, or for click. Sorry. Yeah, so I actually have an answer to that. We use cliff as a tool, but click is actually capable of doing very, very great interactive command line prompts. It should be just click.prompt, and then you have different options, so you can do choices and such. Yeah. Tomorrow at quarter to twelve, in my talk, I will talk about this. Yeah. I think I got the question wrong. You have this prompt, a click.prompt, and then you just get prompted for an option. Is that what you mean? Yes. So click has it. More questions? Yeah, I do have one last one then. In your last projects, which one of these tools did you use? I personally use click most of the time. Okay. But cliff is actually quite new to me too. But I really like this output format stuff. So if I have a project where I could use this, just creating a CSV file out of some other data or so, then I really think it's a good fit. Okay. And I think we're out of time for any more questions anyway. So we will let you go to the coffee break, and thank you again, Patrick.
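For the two audience questions about short/long option spellings and colored output, a tiny click sketch covering both (again just an illustration, not the speaker's code):

    import click

    @click.command()
    @click.option("-n", "--name", default="world",
                  help="Short and long spelling of the same option.")
    def greet(name):
        # click.secho is click.echo plus styling; fg sets the foreground color
        click.secho("Hello {}!".format(name), fg="green")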
Patrick Mühlbauer - Building nice command line interfaces - a look beyond the stdlib One of the problems programmers are most often faced with is the parsing and validation of command-line arguments. If you're new to Python or programming in general, you might start by parsing sys.argv. Or perhaps you might've already come across standard library solutions such as getopt, optparse or argparse in the official documentation. While these modules are probably preferable to parsing sys.argv yourself, you might wonder if there are more satisfactory solutions outside of the standard library. Well, yes there are! This talk will give you an overview of some popular alternatives to the standard library solutions (e.g. click, docopt and cliff), explain their basic concepts and differences and show how you can test your CLIs.